Alberto Ferrer (bet0x)

User data from GitHub: https://github.com/bet0x

Company: @rackspace

Location: Mexico

Home Page: http://www.barrahome.org

GitHub: @bet0x

Alberto Ferrer's repositories

transmla-converter

TransMLA: Multi-Head Latent Attention Converter

Language: Python · License: Apache-2.0 · Stargazers: 5 · Issues: 0

ai-algorithms

First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting research papers.

Stargazers: 0 · Issues: 0

AutoDidact

Autonomously train research-agent LLMs on custom data using reinforcement learning and self-verification.

Stargazers: 0 · Issues: 0

blurred-thoughts-SFT

Blurred-Thoughts Supervised-Finetuning (BT-SFT) is a new approach to fine-tuning language models, focusing on enhancing response diversity and creativity.

License: MIT · Stargazers: 0 · Issues: 0

CAG

Cache-Augmented Generation: A Simple, Efficient Alternative to RAG

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

chain-of-draft

Code and data for the Chain-of-Draft (CoD) paper

Stargazers: 0 · Issues: 0

chonkie

🦛 CHONK your texts with Chonkie ✨ - The no-nonsense RAG chunking library

License: MIT · Stargazers: 0 · Issues: 0

docling-serve

Running Docling as an API service

License: MIT · Stargazers: 0 · Issues: 0

FastGPT

FastGPT is a knowledge-based platform built on LLMs that offers a comprehensive suite of out-of-the-box capabilities, such as data processing, RAG retrieval, and visual AI workflow orchestration, letting you easily develop and deploy complex question-answering systems without extensive setup or configuration.

License: NOASSERTION · Stargazers: 0 · Issues: 0

guardrails

Adding guardrails to large language models.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

haystack-rag-app

An example of a RAG backend plus UI

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

khoj

Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.

License: AGPL-3.0 · Stargazers: 0 · Issues: 0

LightRAG

"LightRAG: Simple and Fast Retrieval-Augmented Generation"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

LLaDA

Official PyTorch implementation for "Large Language Diffusion Models"

License: MIT · Stargazers: 0 · Issues: 0

MCP-Bridge

A middleware that provides an OpenAI-compatible endpoint that can call MCP tools

License: MIT · Stargazers: 0 · Issues: 0

mistral.rs

Blazingly fast LLM inference.

License: MIT · Stargazers: 0 · Issues: 0

nanoRLHF

RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction.

License: MIT · Stargazers: 0 · Issues: 0

open-r1-multimodal

A fork to add multimodal model training to open-r1

License: Apache-2.0 · Stargazers: 0 · Issues: 0

open-webui-mcp

User-friendly AI Interface (Supports Ollama, OpenAI API, ...)

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

openwebui-migrator

Open WebUI Database Migrator

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

OpenWebUI-Tools

Tools for OpenWebUI

License: MIT · Stargazers: 0 · Issues: 0

R1-V

Witness the aha moment of VLM with less than $3.

Stargazers: 0 · Issues: 0

R2R

The most advanced AI retrieval system. Containerized Retrieval-Augmented Generation (RAG) with a RESTful API.

License: MIT · Stargazers: 0 · Issues: 0

R2R-Application

React + Next.js dashboard for R2R: the most advanced AI retrieval system. Containerized Retrieval-Augmented Generation (RAG) with a RESTful API.

License: MIT · Stargazers: 0 · Issues: 0

SoT

Official code repository for Sketch-of-Thought (SoT)

License: MIT · Stargazers: 0 · Issues: 0

unsloth-docker

Unsloth Training Environment

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

License: Apache-2.0 · Stargazers: 0 · Issues: 0