Repositories under the causal-language-modeling topic:
Repository for My HuggingFace Natural Language Processing Projects
Auto-regressive causal language model for molecule (SMILES) and reaction template (SMARTS) generation based on the Hugging Face implementation of OpenAI's GPT-2 transformer decoder model
Transformers Intuition
A quick and easy way to interact with open-source LLMs.
An implementation of low-rank adaptation (LoRA), a parameter-efficient fine-tuning (PEFT) method.
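The LoRA idea above can be sketched in a few lines: freeze the pretrained weight W and learn only a low-rank correction B·A. The matrices, shapes, and helper below are illustrative assumptions for the sketch, not this repository's actual code.

```python
# Minimal pure-Python sketch of low-rank adaptation (LoRA).
# All names and values here are illustrative, not taken from the repo.

def matmul(M, N):
    """Multiply two matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

d, r, alpha = 4, 1, 2.0
# Frozen pretrained weight (identity here, just for illustration).
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
A = [[0.1, 0.2, 0.3, 0.4]]            # trainable r x d down-projection
B = [[0.0] for _ in range(d)]         # trainable d x r up-projection, zero-initialised

def adapted_weight():
    # Effective weight: W + (alpha / r) * B A.
    # Only A and B are trained, i.e. r * 2d parameters instead of d * d.
    BA = matmul(B, A)
    return [[W[i][j] + (alpha / r) * BA[i][j] for j in range(d)]
            for i in range(d)]

# With B zero-initialised, the adapted model starts identical to the base model.
assert adapted_weight() == W
```

Zero-initialising B is the standard LoRA trick: training starts from the unmodified base model, and the low-rank update is learned gradually.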
Causal language modeling and intent classification using GPT-2.
An AI-generated picture book.
Rescoring Automatic Speech Recognition using Large Language Models
Fine-tuning (or training from scratch) the library models for language modeling on a text dataset, covering GPT, GPT-2, ALBERT, BERT, DistilBERT, RoBERTa, XLNet, and others. GPT and GPT-2 are trained or fine-tuned with a causal language modeling (CLM) loss, while ALBERT, BERT, DistilBERT, and RoBERTa are trained or fine-tuned with a masked language modeling (MLM) loss.
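The CLM/MLM distinction above comes down to how the training targets are built. A minimal sketch, with made-up token IDs (not Hugging Face's actual preprocessing code):

```python
# Illustrative token IDs for a short tokenized sentence (values are made up).
tokens = [101, 7592, 2088, 102]

# Causal LM (GPT-style): predict each token from the tokens before it,
# so the labels are simply the inputs shifted left by one position.
clm_inputs = tokens[:-1]
clm_labels = tokens[1:]

# Masked LM (BERT-style): replace some positions with a [MASK] token and
# predict only those; other positions get the ignore index -100 so they
# do not contribute to the loss.
MASK_ID = 103
mask_pos = 1
mlm_inputs = [MASK_ID if i == mask_pos else t for i, t in enumerate(tokens)]
mlm_labels = [t if i == mask_pos else -100 for i, t in enumerate(tokens)]
```

CLM sees only the left context, which is what makes GPT-style models usable for autoregressive generation; MLM sees both sides of the mask, which suits encoding tasks better.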
Course materials for the Machine Learning for NLP course taught by Sameer Singh for the Cognitive Science summer school 2022.
Dataset and model fine-tuning for function calling