Repositories under the huggingface-transformers topic:
Articles I wrote about machine learning, archived from MachineCurve.com.
Chinese NLP solutions (large models, data, models, training, inference)
Simple UI for LLM Model Finetuning
Social networking platform with automated content moderation and context-based authentication system
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and video, up to 5x faster than OpenAI CLIP and LLaVA
Transformer models from BERT to GPT-4, environments from Hugging Face to OpenAI. Fine-tuning, training, and prompt-engineering examples, plus a bonus section covering ChatGPT, GPT-3.5-turbo, GPT-4, and DALL-E, including jump-starting GPT-4, speech-to-text, text-to-speech, text-to-image generation with DALL-E, Google Cloud AI, HuggingGPT, and more
A multimodal model for combining text and tabular data, with HuggingFace Transformers as the building block for the text data
Fast Inference Solutions for BLOOM
[EMNLP 2022] Unifying and multi-tasking structured knowledge grounding with language models
Learn Cloud Applied Generative AI Engineering (GenEng) using OpenAI, Gemini, Streamlit, Containers, Serverless, Postgres, LangChain, Pinecone, and Next.js
Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpeed
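The guide above fits billion-parameter models on one GPU by using DeepSpeed's ZeRO optimizer-state sharding with CPU offload. As a rough illustration (not the guide's exact settings), a minimal ZeRO stage 2 configuration of the kind passed to the HuggingFace `Trainer` via `--deepspeed ds_config.json` might look like this; the `"auto"` values let the Trainer fill in its own hyperparameters:

```json
{
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" },
    "allgather_bucket_size": 2e8,
    "reduce_bucket_size": 2e8
  },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
```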
Sentiment analysis neural network trained by fine-tuning BERT, ALBERT, or DistilBERT on the Stanford Sentiment Treebank.
Low latency JSON generation using LLMs
AI-First Process Automation with Large Language (LLMs), Large Action (LAMs), Large Multimodal (LMMs), and Visual Language (VLMs) Models
Extract knowledge from all information sources using GPT and other language models. Index your sources and run Q&A sessions over them.
Package to compute Mauve, a similarity score between neural text and human text. Install with `pip install mauve-text`.
A tool for generating function arguments and choosing what function to call with local LLMs
Amazon SageMaker Local Mode Examples
Build and train state-of-the-art natural language processing models using BERT
Extending Stable Diffusion prompts with suitable style cues using text generation
Phoneme recognition using the pre-trained models Wav2vec2, HuBERT, and WavLM. This project compares three self-supervised models, Wav2vec (2019, 2020), HuBERT (2021), and WavLM (2022), all pretrained on a corpus of English speech, and uses them in various ways to perform phoneme recognition for different languages with a network trained with the Connectionist Temporal Classification (CTC) algorithm.
Easy-Translate is a script for translating large text files with a SINGLE COMMAND. It is designed to be as easy as possible for beginners and as seamless and customizable as possible for advanced users.
Indonesian Language Models and Their Usage
Dreambooth implementation based on Stable Diffusion with minimal code.
A codebase that makes differentially private training of transformers easy.
A HuggingFace Transformers tutorial using the KLUE dataset
[2021 Hunminjeongeum Korean Speech & Natural Language AI Competition] A repository for sharing team Allakkungdallakkung's training and inference code for the dialogue summarization track.
MLOps for Vision Models (TensorFlow) from Hugging Face Transformers with TensorFlow Extended (TFX)