There are 32 repositories under the huggingface-transformers topic.
🧠💬 Articles I wrote about machine learning, archived from MachineCurve.com.
Chronos: Pretrained Models for Probabilistic Time Series Forecasting
Chinese NLP solutions (large language models, data, models, training, inference)
Social networking platform with automated content moderation and context-based authentication system
Simple UI for LLM Model Finetuning
[CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.
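A usage sketch following the detoxify README; "original", "unbiased", and "multilingual" select models trained on the three Jigsaw challenges:

```python
# Score a batch of comments with the detoxify package
# (pip install detoxify; model names from the project's README).
from detoxify import Detoxify

results = Detoxify("original").predict(
    ["You are wonderful.", "You are an idiot."])
print(results["toxicity"])  # one toxicity score per input comment
```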
Transformer models from BERT to GPT-4, environments from Hugging Face to OpenAI. Fine-tuning, training, and prompt-engineering examples. A bonus section covers ChatGPT, GPT-3.5-turbo, GPT-4, and DALL-E, including jump-starting GPT-4, speech-to-text, text-to-speech, text-to-image generation with DALL-E, Google Cloud AI, HuggingGPT, and more.
Learn Cloud Applied Generative AI Engineering (GenEng) using OpenAI, Gemini, Streamlit, Containers, Serverless, Postgres, LangChain, Pinecone, and Next.js
Multimodal model for text and tabular data, with Hugging Face Transformers as the building block for the text data
[EMNLP 2022] Unifying and multi-tasking structured knowledge grounding with language models
Fast Inference Solutions for BLOOM
Serverless LLM Serving for Everyone.
Guide: Fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
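A rough sketch of the guide's core recipe, not the repo's exact script; `ds_config.json` and `train.txt` are placeholders for a DeepSpeed ZeRO config and a training corpus that you supply:

```python
# Fine-tune GPT-2 XL with the Hugging Face Trainer and DeepSpeed ZeRO.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2-xl-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed="ds_config.json",  # ZeRO offloading fits the 1.5B model on one GPU
)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()
```

Launch it with the `deepspeed` launcher rather than plain `python` so DeepSpeed controls process startup.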
A tool for generating function arguments and choosing what function to call with local LLMs
Low latency JSON generation using LLMs ⚡️
Sentiment analysis neural network trained by fine-tuning BERT, ALBERT, or DistilBERT on the Stanford Sentiment Treebank.
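An inference sketch for the same task using a public SST-2 checkpoint from the Hugging Face hub, not necessarily this repo's fine-tuned weights:

```python
# Classify sentiment with a DistilBERT model fine-tuned on SST-2.
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("A gripping, beautifully shot film."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```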
Package to compute Mauve, a similarity score between neural text and human text. Install with `pip install mauve-text`.
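A usage sketch adapted from the package README; the toy corpora below stand in for machine-generated (p) and human-written (q) text:

```python
# Compute the MAUVE score between two text distributions.
import mauve

p_text = ["the cat sat on the mat"] * 100  # generated text (toy stand-in)
q_text = ["a dog slept on the rug"] * 100  # human text (toy stand-in)

out = mauve.compute_mauve(p_text=p_text, q_text=q_text,
                          device_id=0, max_text_length=256, verbose=False)
print(out.mauve)  # in (0, 1]; higher means the two distributions are closer
```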
Extract knowledge from information sources using GPT and other language models. Index your sources and run Q&A sessions over them.
multilspy is an LSP client library in Python, intended to be used to build applications around language servers.
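A usage sketch following the multilspy README; the repository path, file path, and cursor position below are illustrative assumptions:

```python
# Drive a language server through multilspy and ask for a symbol definition.
from multilspy import SyncLanguageServer
from multilspy.multilspy_config import MultilspyConfig
from multilspy.multilspy_logger import MultilspyLogger

config = MultilspyConfig.from_dict({"code_language": "python"})
lsp = SyncLanguageServer.create(config, MultilspyLogger(), "/abs/path/to/repo/")

with lsp.start_server():
    # Where is the symbol at line 10, column 4 of file.py defined?
    locations = lsp.request_definition("relative/path/file.py", 10, 4)
    print(locations)
```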
Amazon SageMaker Local Mode Examples
Code and data artifact for the NeurIPS 2023 paper "Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context". `multilspy` is an LSP client library in Python, intended to be used to build applications around language servers.
Phoneme recognition using the pre-trained models Wav2Vec2, HuBERT, and WavLM. This project compares three self-supervised models, Wav2Vec (2019, 2020), HuBERT (2021), and WavLM (2022), all pretrained on a corpus of English speech, and uses them in various ways to perform phoneme recognition for different languages with a network trained with the Connectionist Temporal Classification (CTC) algorithm.
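A minimal CTC decoding sketch using a character-level English Wav2Vec2 checkpoint as a stand-in (phoneme recognition needs a phoneme-level head, and the project applies the same recipe to HuBERT and WavLM); the silent dummy clip stands in for one second of real 16 kHz speech:

```python
# Greedy CTC decoding with a pre-trained Wav2Vec2 checkpoint.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.zeros(16000, dtype=np.float32)  # replace with a real waveform
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

ids = torch.argmax(logits, dim=-1)  # greedy CTC: best token per frame
print(processor.batch_decode(ids))  # decoding collapses repeats and blanks
```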
Build and train state-of-the-art natural language processing models using BERT
Easy-Translate is a script for translating large text files with a SINGLE COMMAND. Easy-Translate is designed to be as easy as possible for beginners and as seamless and customizable as possible for advanced users.
Shush is an app that deploys a Whisper v3 model with Flash Attention v2 on Modal and makes requests to it via a Next.js app
Extending Stable Diffusion prompts with suitable style cues using text generation
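A sketch of the idea: a small text-generation model extends a bare prompt with style cues. The checkpoint named here is an assumed public community model, not necessarily the one this repo uses:

```python
# Extend a bare Stable Diffusion prompt with generated style cues.
from transformers import pipeline

extender = pipeline("text-generation",
                    model="Gustavosta/MagicPrompt-Stable-Diffusion")
print(extender("a castle on a hill", max_new_tokens=40)[0]["generated_text"])
```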
A codebase that makes differentially private training of transformers easy.
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.