Ludsfer's repositories
cpp_crypto_algos
C++20 Techniques for Algorithmic Trading
dotnet-db-samples
.NET code samples for Oracle database developers #OracleDotNet
pycodestyle
Simple Python style checker in one Python file
alx-higher_level_programming
My ALX higher-level programming projects
cccl
CUDA C++ Core Libraries
CodeMazeGuides
The main repository for all the Code Maze guides
concurrencpp
Modern concurrency for C++. Tasks, executors, timers and C++20 coroutines to rule them all
cuda-quantum
C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
DALI
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
DeepLearningExamples
State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
Fuser
A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser")
imgui
Dear ImGui: Bloat-free Graphical User Interface for C++ with minimal dependencies
kaolin-wisp
NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).
MailKit
A cross-platform .NET library for IMAP, POP3, and SMTP.
Megatron-LM
Ongoing research training transformer models at scale
NeMo
NeMo: a framework for generative AI
NeMo-Aligner
Scalable toolkit for efficient model alignment
open-gpu-kernel-modules
NVIDIA Linux open GPU kernel module source
Path-Tracing-SDK
Real-time path tracing library and sample
pysystemtrade
Systematic Trading in python
spark-rapids-ml
Spark RAPIDS MLlib – accelerate Apache Spark MLlib with GPUs
stdexec
`std::execution`, the proposed C++ framework for asynchronous and parallel programming.
Stockfish
A free and strong UCI chess engine
tensorflow
An Open Source Machine Learning Framework for Everyone
timesync
TimeSync auto-reminder
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
VulkanMemoryAllocator
Easy to integrate Vulkan memory allocation library