ZW's repositories
rotor-capnp
mio-based async stream for Cap'n Proto messages.
baidu-netdisk-high-speed
:zap: Chrome extension for high-speed downloads from Baidu Netdisk.
incubator-mxnet
Lightweight, portable, flexible distributed/mobile deep learning framework with a dynamic, mutation-aware dataflow dependency scheduler; for Python, R, Julia, Scala, Go, JavaScript and more.
libhdfs3-feedstock
A conda-smithy repository for libhdfs3.
librdkafka-feedstock
A conda-smithy repository for librdkafka.
lingua
The most accurate natural language detection library for Java and the JVM, suitable for long and short text alike
protobuf-feedstock
A conda-smithy repository for protobuf.
python-confluent-kafka-feedstock
A conda-smithy repository for python-confluent-kafka.
tensorflow-DeepFM
Tensorflow implementation of DeepFM for CTR prediction.
tensorflow_recipes
Tensorflow conda recipes
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
tensorrtllm_backend
The Triton TensorRT-LLM Backend
text-generation-inference
Large Language Model Text Generation Inference
triton-inference-server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.