Repositories under the fine-tune topic:
Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts
Fine-tune SAM (Segment Anything Model) for computer-vision tasks such as semantic segmentation, matting, and detection in specific scenarios
Code for finetuning AlexNet in TensorFlow >= 1.2rc0
ImageNet pre-trained models with batch normalization for the Caffe framework
Fine-tuning code for CLIP models
A curated list of open source repositories for AI Engineers
[SOTA] [92% acc] 786M-8k-44L-32H multi-instrumental music transformer with true full MIDI instruments range, efficient encoding, octo-velocity and outro tokens
Various installation guides for Large Language Models
Vision Transformers Need Registers. And Gated MLPs. And +20M params. Tiny modality gap ensues!
BERT based pretrained model using SQuAD 2.0 Dataset for Question-Answering
Use FastSpeech2 and HiFi-GAN to easily perform end-to-end Korean speech synthesis.
A scraper for Substack article text content
DelphiMistralAI wrapper brings Mistral’s text-vision-audio models and agentic Conversations to Delphi, with chat, embeddings, Codestral codegen, fine-tuning, batching, moderation, async/await helpers and live request monitoring.
Domain Randomization Shape Detection
🚂 Fine-tune OpenAI models for text classification, question answering, and more
Training and fine-tuning flan-t5-small model based on provided text
Sparse Autoencoders (SAE) vs CLIP fine-tuning fun.
Official Implementation for the paper titled: "Counterfactual Disease Removal and Generation in Chest X-Rays Using Diffusion Models"
Fine-tuning the LLaMA-2 model on provided text data
Mistral model inference and fine-tuning
Materials for CSE Summer School Hackathon 2024
Fine-tuning the Flan-T5 model with LoRA and LangChain
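Several of the repositories above use LoRA (Low-Rank Adaptation), which freezes the pretrained weights and learns only a low-rank update. A minimal NumPy sketch of the idea follows; the matrix sizes and rank are illustrative, not Flan-T5's actual dimensions:

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4        # illustrative sizes (assumption)
alpha = 8.0                        # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero-init)

def lora_forward(x):
    # Forward pass: frozen path plus scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the LoRA path contributes nothing yet,
# so the adapted model starts out identical to the frozen model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters vs. full fine-tuning:
full_params = W.size           # 64 * 128 = 8192
lora_params = A.size + B.size  # 4 * 128 + 64 * 4 = 768
```

Only `A` and `B` would receive gradient updates during training, which is what lets these repos fine-tune large models on modest hardware.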
Train a neural network to drive a car in a simulator
Fine-tune wav2vec2-xls-r on data from low-resource languages
[Bachelor Graduation Project] Use Xception model for face anti-spoofing
Source code for the first and second tasks of the DeftEval 2020 competition, used by the University Politehnica of Bucharest (UPB) team to train and evaluate the models
🌹[ICML 2024] Selecting Large Language Model to Fine-tune via Rectified Scaling Law
A repository of incremental-learning algorithms such as LwF and iCaRL on the CIFAR-100 dataset with a ResNet architecture
Fine-tuned the Gemini model on our own medical NER dataset and used it to recognize named entities
A very simple SvelteKit template for chatting with your own fine-tuned Transformers models
Geometric Parametrization GmP-Inf-CLIP modification of "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A super memory-efficient CLIP training scheme.