HLearning

Location: Shanghai

HLearning's repositories

fisheye

fisheye image calibration

Language: Python | License: Apache-2.0 | Stargazers: 509 | Issues: 8 | Issues: 3
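
For context, fisheye calibration is commonly done with OpenCV's cv2.fisheye module; the sketch below is an illustrative example of that workflow, not necessarily this repository's code, and the checkerboard size, image paths, and flags are assumptions.

# Illustrative fisheye calibration with OpenCV's cv2.fisheye module.
# Checkerboard size, image paths, and flags are assumptions for this sketch.
import glob
import cv2
import numpy as np

CHECKERBOARD = (6, 9)  # inner corners per row/column (assumed)
objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for path in glob.glob("calib_images/*.jpg"):  # assumed image location
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

K = np.zeros((3, 3))
D = np.zeros((4, 1))
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    objpoints, imgpoints, gray.shape[::-1], K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW)
print("RMS reprojection error:", rms)

# Undistort one image with the estimated intrinsics K and distortion D.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, gray.shape[::-1], cv2.CV_16SC2)
undistorted = cv2.remap(cv2.imread(path), map1, map2, interpolation=cv2.INTER_LINEAR)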

unet_keras

Image semantic segmentation with U-Net in Keras

Language: Python | License: MIT | Stargazers: 316 | Issues: 4 | Issues: 7
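
To illustrate the idea, a minimal U-Net-style encoder–decoder in Keras looks like the following; the depth, filter counts, and single-class sigmoid output are assumptions for a small sketch, not the repository's actual architecture.

# A small U-Net-style model in Keras (filter counts and depth are illustrative assumptions).
from tensorflow import keras
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 3), num_classes=1):
    inputs = keras.Input(shape=input_shape)

    # Encoder: two conv blocks with max pooling.
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(c1)
    p1 = layers.MaxPooling2D(2)(c1)

    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(c2)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck.
    b = layers.Conv2D(128, 3, activation="relu", padding="same")(p2)

    # Decoder: upsample and concatenate matching encoder features (skip connections).
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = layers.Conv2D(64, 3, activation="relu", padding="same")(layers.concatenate([u2, c2]))

    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = layers.Conv2D(32, 3, activation="relu", padding="same")(layers.concatenate([u1, c1]))

    # Per-pixel prediction; sigmoid for a single foreground class.
    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(c4)
    return keras.Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])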

ai_papers

AI Papers

License: Apache-2.0 | Stargazers: 254 | Issues: 2 | Issues: 0

DIYRefresh

A pull-to-refresh framework

Language: Swift | License: MIT | Stargazers: 164 | Issues: 5 | Issues: 4

yolov4

YOLOv4 implemented with TensorFlow and Keras

Language: Python | License: Apache-2.0 | Stargazers: 1 | Issues: 2 | Issues: 0
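
Independent of this repository's code, the core YOLO-style box decoding that any YOLOv4 implementation performs can be shown in a few lines of NumPy; the grid size, anchor values, and raw outputs below are made-up numbers (YOLOv4 additionally applies a scale factor to the sigmoid terms).

# Generic YOLO-style bounding-box decoding (grid size, anchors, and raw outputs are made up).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

grid_w = grid_h = 13           # assumed feature-map size
anchor_w, anchor_h = 1.5, 2.0  # assumed anchor (in grid units)

tx, ty, tw, th = 0.2, -0.1, 0.3, 0.5  # raw network outputs for one cell
cx, cy = 4, 7                          # cell indices

# b_x = sigmoid(t_x) + c_x, b_y = sigmoid(t_y) + c_y
# b_w = p_w * exp(t_w),     b_h = p_h * exp(t_h)
bx = (sigmoid(tx) + cx) / grid_w
by = (sigmoid(ty) + cy) / grid_h
bw = anchor_w * np.exp(tw) / grid_w
bh = anchor_h * np.exp(th) / grid_h
print(bx, by, bw, bh)  # normalized box center and size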

llama

Inference code for LLaMA models

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0
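
As a reference point, the upstream LLaMA repository documents a text-completion flow roughly like the one below; the checkpoint and tokenizer paths and the generation parameters are placeholders, and the script is normally launched with torchrun because the model loads in a distributed fashion.

# Rough sketch of the upstream LLaMA inference flow (paths and parameters are placeholders).
# Typically launched with: torchrun --nproc_per_node 1 this_script.py
from llama import Llama

generator = Llama.build(
    ckpt_dir="llama-2-7b/",            # placeholder checkpoint directory
    tokenizer_path="tokenizer.model",  # placeholder tokenizer path
    max_seq_len=512,
    max_batch_size=4,
)

results = generator.text_completion(
    ["The theory of relativity states that"],
    max_gen_len=64,
    temperature=0.6,
    top_p=0.9,
)
print(results[0]["generation"])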

mlc-llm

Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
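
For illustration, newer releases of the upstream mlc-llm project document an OpenAI-style Python engine API along these lines; the model identifier is a placeholder, and the exact API surface may differ in this fork's snapshot.

# Sketch of mlc_llm's documented Python engine API (model ID is a placeholder;
# the API may differ in older snapshots of the project).
from mlc_llm import MLCEngine

model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
engine = MLCEngine(model)

# OpenAI-compatible chat completion, streamed chunk by chunk.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is MLC LLM?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)

engine.terminate()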

mlx

MLX: An array framework for Apple silicon

Language: C++ | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
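
A few lines showing MLX's NumPy-like Python API, lazy evaluation, and function-transformation gradients; the array values are arbitrary.

# Minimal MLX usage: NumPy-like arrays, lazy evaluation, and automatic differentiation.
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
b = mx.array([4.0, 5.0, 6.0])

# Operations build a lazy computation graph; mx.eval forces materialization.
c = a * b + 1.0
mx.eval(c)
print(c)

# Gradients via function transformation.
def loss(x):
    return (x * x).sum()

grad_fn = mx.grad(loss)
print(grad_fn(a))  # gradient of sum(x^2) is 2x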

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
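
To give a feel for the kind of Python API the description refers to, recent TensorRT-LLM releases expose a high-level LLM class roughly as sketched below; the model name and sampling settings are placeholders, and older versions instead build engines explicitly and run them with a separate runner.

# Sketch of TensorRT-LLM's high-level Python API in recent releases
# (model name and sampling settings are placeholders; exact fields may vary by version).
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # builds/loads a TensorRT engine
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Explain what a TensorRT engine is."], params)
for out in outputs:
    print(out.outputs[0].text)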

triton

Development repository for the Triton language and compiler

Language: C++ | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
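
The canonical Triton example, an element-wise vector-add kernel, gives a feel for the language; the block size and problem size are arbitrary choices, and running it requires a CUDA-capable GPU.

# Classic Triton vector-add kernel (block size and problem size are arbitrary; needs a GPU).
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                    # one program per block of elements
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                    # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                 # number of program instances
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
print(torch.allclose(add(x, y), x + y))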

tvm

Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0
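
As a small illustration of the compiler stack, a Relay function can be compiled for a CPU target and run with the graph executor; the tensor shapes and the "llvm" target below are arbitrary choices for the sketch.

# Compile a tiny Relay function with TVM and run it on CPU
# (shapes and the "llvm" target are arbitrary choices for this sketch).
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

x = relay.var("x", shape=(1, 64), dtype="float32")
w = relay.var("w", shape=(32, 64), dtype="float32")
y = relay.nn.relu(relay.nn.dense(x, w))            # y = relu(x @ w^T)
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

lib = relay.build(mod, target="llvm")              # compile for the local CPU
module = graph_executor.GraphModule(lib["default"](tvm.cpu()))

module.set_input("x", np.random.rand(1, 64).astype("float32"))
module.set_input("w", np.random.rand(32, 64).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)          # (1, 32)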

AutoAWQ

AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation:

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
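
The upstream AutoAWQ project documents a quantization flow along these lines; the model name, output path, and quantization config values below are placeholders chosen as typical defaults rather than anything specific to this fork.

# Sketch of the AutoAWQ 4-bit quantization flow (model name, output path,
# and config values are placeholders; see the upstream docs for details).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"   # placeholder FP16 model
quant_path = "mistral-7b-awq"              # placeholder output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and pack the weights to 4 bits.
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and tokenizer for later inference.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)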

ComputeLibrary-Review

The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.

License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0