LLM-Lora-PEFT_accumulate

Welcome to the LLM-Lora-PEFT_accumulate repository!

This repository contains implementations and experiments related to Large Language Models (LLMs) using PEFT (Parameter-Efficient Fine-Tuning), LoRA (Low-Rank Adaptation of Large Language Models), and QLoRA (Quantized LLMs with Low-Rank Adapters).
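As a quick illustration of the QLoRA side, below is a minimal sketch (not taken from this repo) of loading a base model with the 4-bit NF4 quantization that QLoRA builds on, using transformers and bitsandbytes. The model name and settings are placeholders.

```python
# Minimal sketch: 4-bit NF4 quantization as used by QLoRA.
# Assumes transformers + bitsandbytes are installed; "facebook/opt-1.3b" is only an example model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

qlora_bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store the frozen base weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type from the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)

model_4bit = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",
    quantization_config=qlora_bnb_config,
    device_map="auto",
)
```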

Loading a model in 8-bit precision can save up to 4x memory compared to a full-precision (fp32) model.

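For concreteness, here is a minimal sketch of 8-bit loading with transformers and bitsandbytes; the model name is only an example.

```python
# Minimal sketch: load a causal LM with int8 weights instead of fp32.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "facebook/opt-1.3b"  # example model; any causal LM on the Hub works the same way

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers over available devices automatically
)

# ~1 byte per weight instead of 4 bytes in fp32, hence the up-to-4x saving.
print(f"{model_8bit.get_memory_footprint() / 1e9:.2f} GB")
```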

What does PEFT do?

With PEFT you add small trainable adapters on top of a frozen 8-bit model, so only a small fraction of the parameters are trained and the memory needed for optimizer states drops accordingly. A sketch of this follows below.

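A minimal sketch of what that looks like with the peft library, continuing from the 8-bit model loaded above; the LoRA hyperparameters and target modules are illustrative and depend on the architecture.

```python
# Minimal sketch: wrap the frozen 8-bit model with LoRA adapters via peft.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Freezes the base weights and casts a few layers for stable low-bit training.
model_8bit = prepare_model_for_kbit_training(model_8bit)

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the LoRA update
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # attention projections; module names vary by model family
)

peft_model = get_peft_model(model_8bit, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of all parameters are trainable
```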

Resources

🌐 Websites

πŸ“Ί YouTube Videos

πŸ“„ Papers

πŸ™ GitHub Repositories

🐍 Python Notebooks

SWOT of LLMs

See LLM Analysis with SWOT for more details.

About

LLM-Lora-PEFT_accumulate explores optimizations for Large Language Models (LLMs) using PEFT, LoRA, and QLoRA. Contribute experiments and implementations to enhance LLM efficiency. Join discussions and push the boundaries of LLM optimization. Let's make LLMs more efficient together!


Languages

Language: Jupyter Notebook 100.0%