Edenzzzz's repositories

Stable-Diffusion-for-book-cover-generation

Fine-tunes Stable Diffusion on the Goodreads best-books dataset to test the model's transfer-learning ability. The code runs on a GPU with 12 GB of memory.

Language: Python · Stargazers: 5 · Issues: 2

Stable-Diffusion-Compositions-Analysis

Stable Diffusion Compositions Analysis

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 2 · Issues: 0

Attend-and-Excite

Official Implementation for "Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models" (SIGGRAPH 2023)

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
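
To illustrate the idea behind k-bit quantization, here is a minimal pure-Python sketch of absmax int8 quantization. This is illustrative only, not the bitsandbytes API; the function names are hypothetical.

```python
# Hypothetical sketch of absmax 8-bit quantization, the idea behind
# k-bit quantization libraries such as bitsandbytes (not its actual API).

def quantize_absmax_int8(values):
    """Scale floats into [-127, 127] by the absolute maximum, round to int."""
    absmax = max(abs(v) for v in values) or 1.0
    scale = 127.0 / absmax
    return [round(v * scale) for v in values], scale

def dequantize_int8(quantized, scale):
    """Recover approximate floats from the int8 codes and the stored scale."""
    return [q / scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.75]
codes, scale = quantize_absmax_int8(weights)
restored = dequantize_int8(codes, scale)
# round-trip error is at most half a quantization step, i.e. 0.5 / scale
```

Real libraries apply this per block of weights rather than per tensor, so one outlier does not blow up the error for everything else.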

butterfly

Butterfly matrix multiplication in PyTorch

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
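
A pure-Python sketch of the butterfly idea (the repo implements this in PyTorch; the function name here is illustrative, not the repo's API). A butterfly matrix is a product of log2(n) sparse factors; with every 2x2 block equal to [[1, 1], [1, -1]], the product is the unnormalized Walsh-Hadamard transform, computed in O(n log n) instead of O(n^2).

```python
def butterfly_hadamard(x):
    """Apply log2(n) butterfly factors in place; len(x) must be a power of two.

    Each pass pairs entries at distance h and mixes them with the
    2x2 block [[1, 1], [1, -1]] -- one sparse butterfly factor.
    """
    x = list(x)
    h = 1
    while h < len(x):
        for base in range(0, len(x), 2 * h):
            for i in range(base, base + h):
                x[i], x[i + h] = x[i] + x[i + h], x[i] - x[i + h]
        h *= 2
    return x
```

For n = 4 this matches multiplying by the dense 4x4 Hadamard matrix, using 8 additions instead of 16 multiply-adds.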

ColossalAI

Making large AI models cheaper, faster and more accessible

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

ControlNet_attn_map

Let us control diffusion models!

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

DETA

Detection Transformers with Assignment

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Edenzzzz

Config files for my GitHub profile.

Stargazers: 0 · Issues: 1

keras

Deep Learning for humans

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
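
A minimal sketch of the LoRA update itself (illustrative, not the loralib API): a frozen weight W is adapted by a low-rank product B @ A scaled by alpha / r, so only r * (d_in + d_out) parameters are trained instead of d_in * d_out.

```python
# Hypothetical LoRA merge on plain nested lists (not the loralib API).

def matmul(A, B):
    """Naive dense matmul on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_merge(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where the rank r is the row count of A."""
    r = len(A)
    scaling = alpha / r
    delta = matmul(B, A)  # (d_out x r) @ (r x d_in) -> full-size update
    return [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

Because the merged weight has the same shape as W, inference after merging pays no extra cost over the original model.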

m2

Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"

Language: Assembly · Stargazers: 0 · Issues: 0

Megatron-LM

Ongoing research training transformer models at scale

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

MLSys-Intro

Tests to get started on MLSys

Language: Python · Stargazers: 0 · Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

nccl-tests

NCCL Tests

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

NeSVoR

NeSVoR is a package for GPU-accelerated slice-to-volume reconstruction.

Language: Cuda · License: MIT · Stargazers: 0 · Issues: 0

ocnn-pytorch

Octree-based Sparse Convolutional Neural Networks

License: MIT · Stargazers: 0 · Issues: 0

Partial_Distance_Correlation

Official repository for the paper "On the Versatile Uses of Partial Distance Correlation in Deep Learning" (ECCV 2022).

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

pyreft

ReFT: Representation Finetuning for Language Models

License: Apache-2.0 · Stargazers: 0 · Issues: 0

qlora

QLoRA: Efficient Finetuning of Quantized LLMs

License: MIT · Stargazers: 0 · Issues: 0

ring-flash-attention

Ring attention implementation with flash attention

Stargazers: 0 · Issues: 0
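
The trick that lets ring/flash attention process keys and values one block at a time is online softmax: a running max and normalizer are rescaled as each chunk of scores arrives, so the final result matches a softmax over all scores at once. A pure-Python sketch (illustrative function name, not this repo's API):

```python
import math

def online_softmax_weighted_sum(score_chunks, value_chunks):
    """Accumulate sum_i softmax(s)_i * v_i over chunks, never seeing all scores."""
    m = float("-inf")   # running max of scores seen so far
    l = 0.0             # running normalizer: sum(exp(s - m))
    acc = 0.0           # running unnormalized weighted sum
    for scores, values in zip(score_chunks, value_chunks):
        new_m = max(m, max(scores))
        # rescale previous accumulators to the new max for numerical stability
        correction = math.exp(m - new_m) if l > 0 else 0.0
        l = l * correction + sum(math.exp(s - new_m) for s in scores)
        acc = acc * correction + sum(
            math.exp(s - new_m) * v for s, v in zip(scores, values)
        )
        m = new_m
    return acc / l
```

In ring attention, each rank runs exactly this accumulation while KV blocks rotate around the ring, so attention over a sequence longer than any single device's memory still produces the exact softmax result.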

torch_fsdp_example

A usage example showing the benefits of FSDP (ZeRO-3) over default DDP.

Language: Python · Stargazers: 0 · Issues: 0
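
A back-of-envelope sketch of why FSDP (ZeRO-3) helps: DDP replicates parameters, gradients, and optimizer state on every GPU, while ZeRO-3 shards all three across the ranks. The 16 bytes/param figure below assumes mixed-precision Adam and is an assumption, not something measured from this repo.

```python
# Assumed breakdown for mixed-precision Adam: fp16 weight (2) + fp16 grad (2)
# + fp32 master weight (4) + fp32 momentum (4) + fp32 variance (4).
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4

def ddp_gib_per_gpu(num_params):
    """DDP: every rank holds a full replica of model + optimizer state."""
    return num_params * BYTES_PER_PARAM / 2**30

def zero3_gib_per_gpu(num_params, world_size):
    """ZeRO-3: weights, grads, and optimizer state are sharded across ranks."""
    return ddp_gib_per_gpu(num_params) / world_size

# A 7B-parameter model on 8 GPUs: ~104 GiB/GPU under DDP (does not fit on
# any single current GPU) vs ~13 GiB/GPU under ZeRO-3, before activations.
```

Activation memory and communication buffers come on top of this, which is why ZeRO-3 trades extra all-gather traffic for the sharded footprint.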

TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0