Public repositories under the unsloth topic:
PTIT's Major Project: Website Programming - This repo contains a chatbot for a clothing store. The chatbot acts as an employee with specific knowledge about clothing consultation, website support, and store information.
Materials for CSE Summer School Hackathon 2024
🎋🌿🌟 Instruction Fine-Tuning of Meta Llama 3.2-3B Instruct on Kannada Conversations 🌟🌿🎋 Tailoring the model to follow specific instructions in Kannada, enhancing its ability to generate relevant, context-aware responses based on conversational inputs. 🚀✨ Using the Kannada Instruct dataset for fine-tuning! Happy Finetuning🎇🎉
Cloning yourself using your WhatsApp chat history and training a model on it.
AstorAI is a user-friendly medical chatbot powered by Retrieval-Augmented Generation (RAG) and the advanced LLama 3 model. It offers real-time, accurate responses to a wide range of medical queries, ensuring privacy and security in every interaction. Designed for ease of use, AstorAI provides reliable health information on various topics 24/7.
🤖 AI of Pwo, fine-tuned on Llama-3.1-8B Instruct
Finetuning of Gemma-2 2B for structured output
ResurrectAI is an AI-driven chat application designed to bring the wisdom and knowledge of great historical personalities to life. Leveraging advanced language models and fine-tuning techniques, ResurrectAI enables users to interact with AI avatars of iconic figures, gaining access to their insights, guidance, and philosophical teachings in real time.
This repo offers scripts for fine-tuning LLaMA 3.1 models with QLoRA, running inference, and exporting models. It’s based on my experience building a custom chatbot, and I’m sharing it to help others fine-tune and deploy LLMs on consumer hardware with ease! 😊
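Most of the Unsloth-based fine-tuning projects in this list follow roughly the same QLoRA recipe, so here is a minimal, hedged sketch of what such a script typically looks like. The checkpoint name, the `train.jsonl` file, and the `text` column are placeholders rather than details from the repo above, and exact argument names can vary across unsloth/trl versions.

```python
# Minimal Unsloth QLoRA fine-tuning sketch (model/dataset names are placeholders).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized base model (QLoRA: frozen 4-bit weights + trainable LoRA adapters).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",  # placeholder checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # expects a "text" column

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Save only the LoRA adapters; merging/exporting to other formats is a separate step.
model.save_pretrained("lora_adapters")
tokenizer.save_pretrained("lora_adapters")
```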
In this repo we fine-tune the Llama-3.2-3B-Instruct model for text generation using Unsloth.
We fine-tune an Unsloth Llama model to extract mathematical formulas from images with optical character recognition (OCR).
In this repo we fine-tune the Llama-3.2-11B-Vision-Instruct model on the unsloth/Radiology_mini dataset for image caption generation.
This project demonstrates how to set up a complete Retrieval-Augmented Generation (RAG) pipeline on medical data using the Llama-3-8B model.
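As an illustration of the retrieval step in such a RAG pipeline, here is a minimal sketch that embeds a few documents, retrieves the best match by cosine similarity, and assembles a grounded prompt. The embedding model, the sample documents, and the prompt template are assumptions for illustration, not the project's actual configuration.

```python
# Minimal RAG retrieval sketch: embed documents, rank by cosine similarity, build a prompt.
# The embedding model and documents below are placeholders, not the repo's actual setup.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Metformin is a first-line medication for type 2 diabetes.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID).",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "What is a common first-line drug for type 2 diabetes?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this prompt would then be passed to the Llama-3-8B model for generation
```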
Fine-tuning Llama 3 8B to generate JSON-formatted output for arithmetic questions and processing that output to perform the calculations.
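The post-processing half of that idea can be sketched in a few lines: parse the model's JSON answer and evaluate it. The schema used here (an `operation` name plus an `operands` list) is hypothetical and only stands in for whatever format the fine-tuned model was trained to emit.

```python
# Sketch: parse a model's structured JSON output and compute the arithmetic result.
# The {"operation": ..., "operands": [...]} schema is a hypothetical example.
import json
import operator

OPS = {"add": operator.add, "subtract": operator.sub,
       "multiply": operator.mul, "divide": operator.truediv}

def evaluate(model_output: str) -> float:
    """Apply the named operation left-to-right over the operand list."""
    spec = json.loads(model_output)
    op = OPS[spec["operation"]]
    result = spec["operands"][0]
    for operand in spec["operands"][1:]:
        result = op(result, operand)
    return result

# Example: the fine-tuned model answers "What is 7 times 6?" with structured JSON.
print(evaluate('{"operation": "multiply", "operands": [7, 6]}'))  # 42
```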
This project is a Hindi Article Generator, developed using natural language processing (NLP) techniques to create contextually relevant and coherent articles in Hindi. The model was fine-tuned on a custom dataset of Hindi news headlines and articles collected via web scraping.
Chatbot for IIIT Nagpur using Fine Tuning and RAG
A fine-tuned LLM to solve homework questions ranging from maths to science and social science.
Open-Source Verilog Copilot: Fine-Tuning an LLM with QLoRA on the VeriGen Dataset using Unsloth
Transforming data into datasets for LLM training. Choo choo
In this repo we perform NER, question answering, and text generation using Unsloth and the Llama-3.2-3B-Instruct model.
In this repo we fine-tune the Pixtral-12B-2409 model using Unsloth for visual question answering (VQA).
This project demonstrates how to fine-tune the Llama-3-8B model on medical data using an NVIDIA T4 Tensor Core GPU.
A fine-tuned Mistral-7B model integrated with an SQL database to handle general customer support questions and manage orders.
RAG medical assistant built on a Llama-3.1-8B model fine-tuned on a medical conversational dataset.
Llama3 Fine-Tuning for ABAP using Unsloth 4-Bit QLoRA
⚙️ Fine-tune 🦙 Llama 3.1, Phi-3, … models on a custom dataset using 🕴️ Unsloth, and save them to the Hugging Face Hub
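For the "save to the Hugging Face Hub" step, a minimal sketch using the standard PEFT/Transformers `push_to_hub` calls is shown below. The local adapter folder and repo id are placeholders, and an authenticated session (e.g. `huggingface-cli login`) is assumed; Unsloth also ships its own helpers for pushing merged or GGUF weights, which are not shown here.

```python
# Sketch: push fine-tuned LoRA adapters and tokenizer to the Hugging Face Hub.
# "lora_adapters" and the repo id are placeholders; an authenticated HF session is assumed.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("lora_adapters")  # local adapter folder
tokenizer = AutoTokenizer.from_pretrained("lora_adapters")

repo_id = "your-username/llama3.1-custom-lora"  # placeholder repo id
model.push_to_hub(repo_id)      # uploads the adapter weights and config
tokenizer.push_to_hub(repo_id)  # uploads the tokenizer files
```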
Fine-tuned the LLaMA 3 model to generate comprehensive blog posts. The model is trained on a dataset of blog posts spanning various topics to enhance its ability to produce relevant content.
Optimized language model research and development for solving Korean CSAT (College Scholastic Ability Test) problems, with a focus on Korean language and social studies sections.
This repository provides resources for fine-tuning various types of models using different techniques and frameworks.
Comprehensive exploration of LLMs, including cutting-edge techniques and tools such as parameter-efficient fine-tuning (PEFT), quantization, zero redundancy optimizers (ZeRO), fully sharded data parallelism (FSDP), DeepSpeed, and Hugging Face Accelerate.
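To make the PEFT-plus-quantization combination above concrete, here is a small sketch using bitsandbytes 4-bit loading with a LoRA adapter via Hugging Face PEFT; the base model name and LoRA hyperparameters are illustrative assumptions, not settings from the repo above.

```python
# Sketch: 4-bit quantized base model + LoRA adapter via Hugging Face PEFT.
# Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",   # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```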
This repository contains experiments on fine-tuning LLMs (Llama, Llama3.1, Gemma). It includes notebooks for model tuning, data preprocessing, and hyperparameter optimization to enhance model performance.
A fine-tuned Llama model designed to deliver empathetic responses, seamlessly integrated with sentiment analysis to deeply understand and adapt to emotional nuances.