Kentaro Yoshioka's repositories
vision-transformers-cifar10
Let's train Vision Transformers (ViT) on CIFAR-10!
apple-lidar-stream
Stream Apple LiDAR (iPad/iPhone) data with open3d
benchmark-FP32-FP16-INT8-with-TensorRT
Benchmark the inference speed of CNNs under various quantization methods with PyTorch + TensorRT on Jetson Nano/Xavier
timm_speed_benchmark
Benchmark Benchmark Benchmark!
ODA-Object-Detection-ttA
ODA is a test-time augmentation (TTA) tool for 2D object detectors. For use in Kaggle competitions.
Deep-Compression.Pytorch
Unofficial PyTorch implementation of Deep Compression, evaluated on CIFAR-10
youtube-stream-downloader
Download videos from YouTube, Twitch, and other sites with a Python script.
quantize_models_sandbox
Quantize models like ViT and MLP-Mixer
ADC-Quantization.python
Simulate ADC quantization with Python/NumPy.
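The core of such a simulation can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the repository: the function name `adc_quantize` and the mid-rise quantizer model are assumptions for the example.

```python
import numpy as np

def adc_quantize(signal, n_bits, full_scale=1.0):
    """Quantize a signal to an n-bit mid-rise ADC over [-full_scale, +full_scale).

    Illustrative model only; the actual repository may use a different scheme.
    """
    levels = 2 ** n_bits
    lsb = 2 * full_scale / levels                       # step size (1 LSB)
    codes = np.clip(np.floor(signal / lsb),             # integer codes, clipped
                    -levels // 2, levels // 2 - 1)
    return codes * lsb + lsb / 2                        # reconstructed value

# 8-bit quantization of a near-full-scale sine; ideal SNR ~ 6.02*N + 1.76 dB
t = np.linspace(0, 1, 4096, endpoint=False)
x = 0.99 * np.sin(2 * np.pi * 7 * t)
xq = adc_quantize(x, n_bits=8)
noise = xq - x
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
```

For an 8-bit converter the measured SNR should land near the textbook 49.9 dB figure, which is a quick sanity check that the quantizer model behaves like an ideal ADC.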
benchmark-mygpu-pytorch
Benchmark my GPU (personal notes)
choka-analysis
Scrape catch reports from Tsuri Vision (釣りビジョン) and visualize them
quantize-huggingface
Quantize Huggingface transformers like BERT :hugs:
benchmark-object-detectors
Benchmarking object detection libraries
citation-tracker
Track your citations on Google Scholar
deep_running
Deep Running
ebook-GPT-translator
Enjoy reading with your favorite style.
ffcv-imagenet
Train ImageNet *fast* in 500 lines of code with FFCV
IEICE-TimeBasedCurrentSource
LaTeX source of the paper "Time-Based Current Source: A Highly Digital Robust Current Generator for Switched Capacitor Circuits"
lab_startup
Setup guide for the lab's PCs
Swin-Transformer-Object-Detection
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Object Detection and Instance Segmentation.
TVLSI-VCO-comparator
LaTeX source of the paper "VCO-based Comparator: A Fully Adaptive Noise Scaling Comparator for High-Precision and Low-Power SAR ADCs"
UniTrack
Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).