LAION AI's repositories
Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, interacts with third-party systems, and retrieves information dynamically to do so.
CLIP_benchmark
CLIP-like model evaluation
scaling-laws-openclip
Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143)
Desktop_BUD-E
BUD-E (Buddy) is an open-source voice assistant framework that facilitates seamless interaction with AI models and APIs, enabling the creation and integration of diverse skills for educational and research applications.
emotional-speech-annotations
This repository contains prompts and best practices for annotating audio clips in high detail using audio-language models.
Desktop-BUD-E_V1.0
BUD-E (Buddy) is an open-source voice assistant framework that facilitates seamless interaction with AI models and APIs, enabling the creation and integration of diverse skills for educational and research applications.
school-bud-e-frontend-old
A frontend compatible with the school-bud-e-backend.
project-alexandria
Official repo for Project Alexandria
Megatron-LM-Open-Sci
Megatron-LM fork for Open-Sci
annotate-collection
A repository with data for annotation.
open_clip_mammut
OpenCLIP fork with MaMMUT support
curiosit-e
File server for curiosit-e content.
bud-e-mobile
Mobile app development for all bud-e derivatives.
school-bud-e-frontend
School Bud-E is an intelligent and empathetic learning assistant designed to revolutionize the educational experience.
Admin_Bud-E
Admin Bud-E is a lightweight, privacy-first control center for AI chat, speech-to-text, and text-to-speech. Manage providers, routing, and costs with a simple Admin Console. Give users per-period credits, per-model prices, and a shared Common Pool. EU-friendly via OpenAI-format endpoints or our optional Google Cloud Vertex proxy.
emonet-face
Official repository for the NeurIPS 2025 paper “EmoNet-Face: An Expert-Annotated Benchmark for Synthetic Emotion Recognition.” Includes a 40-category emotion taxonomy, balanced synthetic datasets, expert annotations, and baseline models for fair and reproducible evaluation.
transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training.