sqhao (HSQ79815)


Company: Xiaohongshu

Location: Beijing


sqhao's repositories


TensorRT

TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.

Language: C++ | License: Apache-2.0 | Stargazers: 1 | Issues: 0

onnx-tensorrt

ONNX-TensorRT: TensorRT backend for ONNX

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0

openvino

OpenVINO™ Toolkit repository

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0

openvino2tensorflow

This script converts an OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. The typical pipeline is PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC) -> TFLite (NHWC). It also converts between .pb, saved_model, .tflite, and ONNX. Build environments can be set up with Docker, with direct access to the host PC's GUI and camera to verify operation. NVIDIA GPUs (dGPU) and Intel iHD GPUs (iGPU) are supported.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

3d-photo-inpainting

[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting

License: NOASSERTION | Stargazers: 0 | Issues: 0

backend

Common source, scripts and utilities for creating Triton backends.

License: BSD-3-Clause | Stargazers: 0 | Issues: 0

cnpy

A library to read/write .npy and .npz files in C/C++.

License: MIT | Stargazers: 0 | Issues: 0
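cnpy itself is C++, but the .npy format it parses is NumPy's native one; a minimal Python round-trip (assuming NumPy is available) shows the kind of file cnpy would read on the C++ side:

```python
import os
import tempfile

import numpy as np

# Save an array in the .npy format that cnpy parses from C++.
arr = np.arange(12, dtype=np.float32).reshape(3, 4)
path = os.path.join(tempfile.mkdtemp(), "example.npy")
np.save(path, arr)

# Round-trip: a C++ consumer would recover the same shape, dtype, and data.
loaded = np.load(path)
assert loaded.shape == (3, 4)
assert loaded.dtype == np.float32
assert np.array_equal(arr, loaded)
```

The same header-plus-raw-bytes layout is what makes .npy a convenient interchange format between Python training code and C++ inference code.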

flashinfer

FlashInfer: Kernel Library for LLM Serving

Language: Cuda | License: Apache-2.0 | Stargazers: 0 | Issues: 0

gemma.cpp

A lightweight, standalone C++ inference engine for Google's Gemma models.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

highway

Performance-portable, length-agnostic SIMD with runtime dispatch

License: Apache-2.0 | Stargazers: 0 | Issues: 0
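Highway's API is C++; as a loose Python analogy (NumPy stands in here, this is not Highway's API), the payoff of length-agnostic SIMD is replacing an element-at-a-time loop with one whole-array operation:

```python
import numpy as np

def scalar_mul_add(x, y, a):
    # Element-at-a-time: what an unvectorized C++ loop computes.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

x = np.arange(8, dtype=np.float32)
y = np.ones(8, dtype=np.float32)

# Whole-array form: conceptually, what SIMD lanes compute in parallel.
vectorized = 2.0 * x + y

assert np.allclose(scalar_mul_add(x, y, 2.0), vectorized)
```

Highway additionally dispatches at runtime to the widest vector instructions the CPU supports, which the NumPy analogy does not capture.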

onnx

Open standard for machine learning interoperability

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0

ONNX-Python-Examples

ONNX Python Examples

License: MIT | Stargazers: 0 | Issues: 0

onnx-simplifier

Simplify your ONNX model.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0

optimizer

Actively maintained ONNX Optimizer

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0

perf-ninja

This is an online course where you can learn and master the skill of low-level performance analysis and tuning.

Stargazers: 0 | Issues: 0

pillow-resize

A port of Pillow's resize method to C++ using OpenCV.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

prajna

A programming language for AI infrastructure.

License: NOASSERTION | Stargazers: 0 | Issues: 0

pybind11

Seamless operability between C++11 and Python

License: NOASSERTION | Stargazers: 0 | Issues: 0

rocketmq-client-cpp

Apache RocketMQ C++ client.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

stable-diffusion.cpp

Stable Diffusion in pure C/C++

License: MIT | Stargazers: 0 | Issues: 0

stable-fast

An inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.

License: MIT | Stargazers: 0 | Issues: 0

tensorRT_Pro

A C++ library for TensorRT integration.

License: MIT | Stargazers: 0 | Issues: 0

treelite

A model compiler for decision tree ensembles.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

trt-samples-for-hackathon-cn

Simple samples for TensorRT programming

License: Apache-2.0 | Stargazers: 0 | Issues: 0

TRTorch

PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT

License: BSD-3-Clause | Stargazers: 0 | Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

License: Apache-2.0 | Stargazers: 0 | Issues: 0

whisper

Robust Speech Recognition via Large-Scale Weak Supervision

Language: Python | License: MIT | Stargazers: 0 | Issues: 0