HanderRi


HanderRi's starred repositories

Swin-Transformer

This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".

Language: Python · License: MIT · Stars: 13762 · Issues: 0

A-ViT

Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022)

Language: Python · License: Apache-2.0 · Stars: 148 · Issues: 0

HiSparse

High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS

Language: C++ · License: BSD-3-Clause · Stars: 79 · Issues: 0

SSR

SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24)

Language: C · Stars: 23 · Issues: 0

ViT-FPGA-TPU

FPGA based Vision Transformer accelerator (Harvard CS205)

Language: SystemVerilog · Stars: 81 · Issues: 0

Xilinx-FPGA-HLS-PYNQ-ALVEO-Flow

Simple examples for FPGA design using Vivado HLS for high level synthesis and Vivado for bitstream generation.

Language: Jupyter Notebook · License: MIT · Stars: 25 · Issues: 0

Kria-PYNQ

PYNQ support and examples for Kria SOMs

Language: Jupyter Notebook · License: NOASSERTION · Stars: 90 · Issues: 0

I-ViT

[ICCV 2023] I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference

Language: Python · License: Apache-2.0 · Stars: 149 · Issues: 0
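Integer-only inference schemes like I-ViT (and the post-training quantization repos further down this list) are built on uniform quantization: floats are mapped to small integers through a scale factor. A minimal sketch of the symmetric per-tensor variant follows; the names `quantize`, `dequantize`, and `n_bits` are illustrative, not taken from the repository.

```python
# Minimal sketch of symmetric uniform quantization, the building block behind
# integer-only ViT inference. Illustrative only; not I-ViT's actual code.

def quantize(xs, scale, n_bits=8):
    """Map floats to signed integers: q = clamp(round(x / scale))."""
    qmax = 2 ** (n_bits - 1) - 1  # 127 for int8
    return [max(-qmax - 1, min(qmax, round(x / scale))) for x in xs]

def dequantize(qs, scale):
    """Recover approximate floats: x ≈ q * scale."""
    return [q * scale for q in qs]

weights = [0.5, -1.2, 0.03, 2.0]
scale = max(abs(w) for w in weights) / 127   # per-tensor scale from max |w|
q = quantize(weights, scale)                  # integers in [-128, 127]
recovered = dequantize(q, scale)
# Round-trip error is bounded by scale / 2 per element.
```

The point of the integer-only approach is that matrix multiplies then run entirely on the `q` values, with the scales folded in once at the end.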

sift_pyocl

An implementation of SIFT on GPU with OpenCL

Language: Python · Stars: 83 · Issues: 0

FPGA_SIFT_Algorithm

Implementation of SIFT Algorithm on FPGA

Language: Assembly · Stars: 8 · Issues: 0

trans-fat

An FPGA Accelerator for Transformer Inference

Language: Jupyter Notebook · Stars: 72 · Issues: 0

HLS_SIFT_subkernel

Sub part of the SIFT algorithm as a Vitis HLS accelerated kernel

Language: C++ · Stars: 1 · Issues: 0

ezSIFT

ezSIFT: An easy-to-use standalone SIFT library written in C/C++

Language: C++ · License: Apache-2.0 · Stars: 95 · Issues: 0
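SIFT libraries like ezSIFT (and the FPGA implementations above) typically filter candidate matches with Lowe's ratio test: a descriptor match is accepted only if the nearest neighbor is clearly closer than the second nearest. A sketch with made-up 2-d vectors (real SIFT descriptors are 128-d):

```python
# Lowe's ratio test for filtering SIFT descriptor matches.
# Illustrative sketch; function names and the 0.75 ratio are conventional
# defaults, not taken from any repository above.

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def ratio_test(query, candidates, ratio=0.75):
    """Return the index of the best match, or None if ambiguous."""
    order = sorted(range(len(candidates)),
                   key=lambda i: euclidean(query, candidates[i]))
    best, second = order[0], order[1]
    # Accept only if best distance < ratio * second-best distance.
    if euclidean(query, candidates[best]) < ratio * euclidean(query, candidates[second]):
        return best
    return None
```

Rejecting ambiguous matches this way is what makes brute-force descriptor matching usable despite repetitive image structure.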

hls4ml

Machine learning on FPGAs using HLS

Language: C++ · License: Apache-2.0 · Stars: 1250 · Issues: 0

SIFT

Real Time SIFT implementation on FPGA

Stars: 1 · Issues: 0

SIFT-implementation-in-Verilog

Using Verilog to implement the SIFT algorithm on an FPGA for small robotics applications

Stars: 36 · Issues: 0

PTQ4ViT

Post-training quantization for Vision Transformers.

Language: Python · Stars: 182 · Issues: 0

ViTCoD

[HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Language: Python · License: Apache-2.0 · Stars: 91 · Issues: 0

flower

A Comprehensive Dataflow Compiler for High-Level Synthesis

Language: CMake · License: MIT · Stars: 8 · Issues: 0

Transformer-Accelerator-Based-on-FPGA

It runs on the PYNQ-Z1 board. The repository contains the relevant Verilog code, the Vivado configuration, and C code for SDK testing. The systolic array size is configurable; it is currently 16×16.

Language: Verilog · Stars: 103 · Issues: 0
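A W×W systolic array like the one in this accelerator computes a matrix multiply by streaming W×W tiles of the operands through the array and accumulating partial products. A functional (not cycle-accurate) model of that tiling, with W=4 rather than the repository's 16 just to keep the example small:

```python
# Behavioural sketch of how a W x W systolic array tiles C = A * B.
# This models the tiling and accumulation only, not the repo's Verilog.

W = 4  # systolic array dimension (the repository uses 16x16)

def matmul_tiled(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i0 in range(0, n, W):           # tile rows of A
        for j0 in range(0, n, W):       # tile columns of B
            for k0 in range(0, n, W):   # reduction dimension, accumulated
                for i in range(i0, min(i0 + W, n)):
                    for j in range(j0, min(j0 + W, n)):
                        C[i][j] += sum(A[i][k] * B[k][j]
                                       for k in range(k0, min(k0 + W, n)))
    return C
```

Each k0 iteration corresponds to one pass of operand tiles through the array, with the output tile staying resident and accumulating.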

segment-anything-with-clip

Segment Anything combined with CLIP

Language: Python · License: Apache-2.0 · Stars: 331 · Issues: 0

segment-anything

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 47189 · Issues: 0

FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer

Language: Python · License: Apache-2.0 · Stars: 304 · Issues: 0

paper-reading

Paragraph-by-paragraph close readings of classic and recent deep learning papers

License: Apache-2.0 · Stars: 26748 · Issues: 0

CLIP

CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

Language: Jupyter Notebook · License: MIT · Stars: 25452 · Issues: 0
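Once CLIP has embedded an image and a set of captions into a shared space, "predict the most relevant text" reduces to an argmax over cosine similarities. A toy sketch of that retrieval step, with made-up 3-d vectors standing in for the model's 512/768-d embeddings:

```python
# Sketch of CLIP's retrieval step: rank texts by cosine similarity to an
# image embedding. The vectors and captions here are fabricated for
# illustration; a real pipeline would obtain them from the CLIP encoders.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

image_emb = [0.9, 0.1, 0.2]                 # hypothetical image embedding
text_embs = {
    "a photo of a dog": [0.8, 0.2, 0.1],    # hypothetical text embeddings
    "a photo of a cat": [0.1, 0.9, 0.3],
}
best = max(text_embs, key=lambda t: cosine(image_emb, text_embs[t]))
```

Zero-shot classification with CLIP is exactly this, with one caption per class label.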

xls

XLS: Accelerated HW Synthesis

Language: C++ · License: Apache-2.0 · Stars: 1201 · Issues: 0

kria-vitis-platforms

Kria Vitis platforms and overlays

Language: SystemVerilog · License: Apache-2.0 · Stars: 87 · Issues: 0

Edge-MoE

Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts

Language: C++ · Stars: 83 · Issues: 0