Dong Meng's repositories
tensorflow-tensorrt-utils
Scripts for working with TF-TRT (the TensorFlow-TensorRT integration) in TensorFlow 1 and TensorFlow 2
kubeflow-pipeline-nvidia-example
A Kubeflow pipeline example using NVIDIA Deep Learning Examples and Triton Inference Server
alphafold
Open source code for AlphaFold.
AnimeGANv2
The improved version of AnimeGAN; converts landscape photos and videos to anime style
BackgroundMattingV2
Real-Time High-Resolution Background Matting
beam
Apache Beam is a unified programming model for batch and streaming data processing
compression
Data compression in TensorFlow
cuda-samples
Samples for CUDA developers demonstrating features of the CUDA Toolkit
data-science-blueprints
Example systems showing how to accelerate modern machine learning and data processing workloads
DeepFaceLab
DeepFaceLab is the leading software for creating deepfakes.
dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
google-research
Google Research
initialization-actions
Scripts that run on all nodes of your cluster before the cluster starts, letting you customize the cluster
merlin-on-gcp
An example of running NVIDIA Merlin on Google Cloud
model-viewer
Easily display interactive 3D models on the web and in AR!
NeMo-Megatron-Launcher
NeMo Megatron launcher and tools
nerf
Code release for NeRF (Neural Radiance Fields)
PerfKitBenchmarker
PerfKit Benchmarker (PKB) contains a set of benchmarks to measure and compare cloud offerings. The benchmarks use default settings to reflect what most users will see. PerfKit Benchmarker is licensed under the Apache 2 license terms. Please make sure to read, understand and agree to the terms of the LICENSE and CONTRIBUTING files before proceeding.
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
spark-rapids
Spark RAPIDS plugin: accelerates Apache Spark with GPUs
TensorRT
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.