Kye Gomez (kyegomez)

Company: Swarms

Location: Palo Alto

Home Page: https://github.com/kyegomez/swarms

Twitter: @KyeGomezB

Kye Gomez's repositories

BitNet

Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch

Language: Python | License: MIT | Stargazers: 1533 | Issues: 39 | Issues: 36
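
A minimal sketch of the core idea behind BitNet, not the repo's API: a linear layer whose weights are binarized to ±1 with a per-tensor scale and trained through the quantization with a straight-through estimator. The name `BitLinearSketch` is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinearSketch(nn.Linear):
    """Linear layer with 1-bit (sign) weights and a per-tensor scale."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight - self.weight.mean()   # center the latent weights
        scale = w.abs().mean()                 # per-tensor scaling factor
        w_bin = torch.sign(w) * scale          # binarize to {-scale, +scale}
        # Straight-through estimator: the forward pass uses w_bin, while
        # gradients flow to the full-precision latent weights.
        w_ste = w + (w_bin - w).detach()
        return F.linear(x, w_ste, self.bias)

layer = BitLinearSketch(16, 32)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 32])
```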

swarms

The Enterprise-Grade, Production-Ready Multi-Agent Orchestration Framework. Join our community: https://discord.com/servers/agora-999382051935506503

Language: Python | License: NOASSERTION | Stargazers: 1136 | Issues: 30 | Issues: 249

MultiModalMamba

A novel implementation fusing ViT with Mamba into a fast, agile, high-performance multi-modal model. Powered by Zeta, the simplest AI framework ever.

Language: Python | License: MIT | Stargazers: 430 | Issues: 8 | Issues: 3

Gemini

The open-source implementation of Gemini, the model from Google that will "eclipse ChatGPT"

Language: Python | License: MIT | Stargazers: 411 | Issues: 12 | Issues: 8

zeta

Build high-performance AI models with modular building blocks

Language: Python | License: Apache-2.0 | Stargazers: 379 | Issues: 4 | Issues: 60

VisionMamba

Implementation of Vision Mamba from the paper "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model". It is 2.8x faster than DeiT and saves 86.8% of GPU memory when performing batch inference to extract features on high-resolution images.

Language: Python | License: MIT | Stargazers: 353 | Issues: 6 | Issues: 17

awesome-multi-agent-papers

A compilation of the best multi-agent papers

RT-X

PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models"

Language: Python | License: MIT | Stargazers: 156 | Issues: 8 | Issues: 6

Python-Package-Template

An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting, and much more

Language: Shell | License: MIT | Stargazers: 125 | Issues: 2 | Issues: 0

Mixture-of-Depths

Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"

Language: Python | License: MIT | Stargazers: 56 | Issues: 4 | Issues: 2
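
A minimal sketch of the Mixture-of-Depths routing pattern the paper describes (assumed names, not the repo's API): a scalar router scores each token, only the top-k tokens pass through the block, and the rest skip it via the residual stream.

```python
import torch
import torch.nn as nn

def mixture_of_depths(x, router: nn.Linear, block: nn.Module, capacity: int):
    """x: (batch, seq, dim); router maps dim -> 1; block maps dim -> dim."""
    scores = router(x).squeeze(-1)                    # (batch, seq)
    topk = scores.topk(capacity, dim=-1).indices      # routed token positions
    idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
    selected = x.gather(1, idx)                       # (batch, capacity, dim)
    # Weight the block output by the router score so routing stays differentiable.
    weight = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
    out = x.clone()                                   # unrouted tokens pass through
    out.scatter_(1, idx, selected + weight * block(selected))
    return out

dim = 32
x = torch.randn(2, 10, dim)
y = mixture_of_depths(x, nn.Linear(dim, 1), nn.Linear(dim, dim), capacity=4)
print(y.shape)  # torch.Size([2, 10, 32])
```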

Infini-attention

Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch

Language: Python | License: MIT | Stargazers: 48 | Issues: 3 | Issues: 1
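
A minimal sketch of the compressive memory at the heart of Infini-attention (assumed shapes and names): keys and values of past segments are folded into a fixed-size matrix memory and retrieved with a linear-attention read, so the state no longer grows with context length.

```python
import torch
import torch.nn.functional as F

def phi(x):
    """Kernel feature map (ELU + 1) so all entries stay positive."""
    return F.elu(x) + 1.0

def memory_update(M, z, k, v):
    """Fold a segment's keys/values into the memory. M: (d_k, d_v), z: (d_k,)."""
    return M + phi(k).t() @ v, z + phi(k).sum(dim=0)

def memory_read(M, z, q):
    """Linear-attention retrieval: (seq, d_k) -> (seq, d_v)."""
    q = phi(q)
    return (q @ M) / (q @ z).clamp_min(1e-6).unsqueeze(-1)

d_k, d_v = 16, 16
M, z = torch.zeros(d_k, d_v), torch.zeros(d_k)
for _ in range(3):                                  # stream segments into memory
    k, v = torch.randn(8, d_k), torch.randn(8, d_v)
    M, z = memory_update(M, z, k, v)
print(memory_read(M, z, torch.randn(8, d_k)).shape)  # torch.Size([8, 16])
```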

Lets-Verify-Step-by-Step

"Improving Mathematical Reasoning with Process Supervision" by OPENAI

Language: Python | License: Apache-2.0 | Stargazers: 47 | Issues: 3 | Issues: 1

NeoSapiens

The next evolution of Agents

Language: Python | License: MIT | Stargazers: 44 | Issues: 4 | Issues: 2

Reka-Torch

Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch

Language: Python | License: MIT | Stargazers: 27 | Issues: 2 | Issues: 0

swarms-cloud

Deploy your autonomous agents to production-grade environments with a 99% uptime guarantee, infinite scalability, and self-healing.

Language: Python | License: MIT | Stargazers: 22 | Issues: 4 | Issues: 21

MM1

PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training"

Language: Python | License: MIT | Stargazers: 21 | Issues: 3 | Issues: 0

MambaFormer

Implementation of MambaFormer in PyTorch + Zeta, from the paper "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks"

Language: Python | License: MIT | Stargazers: 19 | Issues: 4 | Issues: 3

MLXTransformer

A simple implementation of a Transformer in MLX, Apple's new framework

Language: Python | License: MIT | Stargazers: 19 | Issues: 4 | Issues: 0

BRAVE-ViT-Swarm

Implementation of the paper: "BRAVE: Broadening the visual encoding of vision-language models"

Language: Python | License: MIT | Stargazers: 17 | Issues: 2 | Issues: 1

MHMoE

Community Implementation of the paper: "Multi-Head Mixture-of-Experts" In PyTorch

Language: Python | License: MIT | Stargazers: 16 | Issues: 2 | Issues: 0
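
A minimal sketch of the multi-head MoE idea (hypothetical names, not the repo's API): each token is split into sub-token "heads", every sub-token is routed top-1 to a small expert, and the expert outputs are merged back into the token dimension.

```python
import torch
import torch.nn as nn

class MHMoESketch(nn.Module):
    def __init__(self, dim=32, heads=4, num_experts=4):
        super().__init__()
        self.heads, self.sub = heads, dim // heads
        self.router = nn.Linear(self.sub, num_experts)
        self.experts = nn.ModuleList(
            nn.Linear(self.sub, self.sub) for _ in range(num_experts)
        )

    def forward(self, x):                               # x: (batch, seq, dim)
        b, s, d = x.shape
        sub = x.reshape(b * s * self.heads, self.sub)   # split into sub-tokens
        probs = self.router(sub).softmax(-1)
        top1 = probs.argmax(-1)                         # top-1 expert per sub-token
        out = torch.zeros_like(sub)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                # Scale by the router probability so the router gets gradient.
                out[mask] = probs[mask, i:i+1] * expert(sub[mask])
        return out.reshape(b, s, d)                     # merge heads back

print(MHMoESketch()(torch.randn(2, 10, 32)).shape)  # torch.Size([2, 10, 32])
```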

TeraGPT

Train a production-grade GPT in fewer than 400 lines of code. Better than Karpathy's version and gigaGPT

Language: Python | License: MIT | Stargazers: 15 | Issues: 3 | Issues: 0

SimplifiedTransformers

SimplifiedTransformer simplifies the transformer block without affecting training: skip connections, projection parameters, sequential sub-blocks, and normalization layers are removed. Experimental results confirm similar training speed and performance.

Language: Python | License: MIT | Stargazers: 14 | Issues: 3 | Issues: 3

CELESTIAL-1

Omni-Modality Processing, Understanding, and Generation

Language: Python | License: Apache-2.0 | Stargazers: 7 | Issues: 4 | Issues: 0

ShallowFF

Zeta implementation of "Rethinking Attention: Exploring Shallow Feed-Forward Neural Networks as an Alternative to Attention Layers in Transformers"

Language: Python | License: MIT | Stargazers: 7 | Issues: 3 | Issues: 1

FRViT

An attempt to create the most accurate, reliable, and general vision transformers for facial recognition at scale.

Language: Python | License: MIT | Stargazers: 6 | Issues: 3 | Issues: 0

GiediPrime

An experimental architecture using a Mixture of Attentions with sandwiched Macaron feed-forwards and other modules

Language: Python | License: MIT | Stargazers: 6 | Issues: 3 | Issues: 0

kyegomez

Advancing Humanity with Multi-Modality AI

synchro

Synchronize your requirements.txt and pyproject.toml at the push of a button!

Language: Python | License: MIT | Stargazers: 6 | Issues: 1 | Issues: 0
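
A minimal sketch of the idea behind synchro (not the tool's actual CLI or behavior): read the dependency list from pyproject.toml and mirror it into requirements.txt. A PEP 621 layout is assumed; Poetry-style projects keep dependencies elsewhere.

```python
import tomllib  # Python 3.11+; use the third-party `tomli` package on older versions

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)

# PEP 621 projects list dependencies under [project].
deps = pyproject.get("project", {}).get("dependencies", [])

with open("requirements.txt", "w") as f:
    f.write("\n".join(deps) + "\n")

print(f"Wrote {len(deps)} requirements to requirements.txt")
```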

AoA-torch

Implementation of Attention on Attention in Zeta

Language: Python | License: MIT | Stargazers: 4 | Issues: 3 | Issues: 0
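
A minimal sketch of the Attention on Attention mechanism (hypothetical names, written in plain PyTorch rather than Zeta): the output of standard attention is concatenated with the query, then an "information" vector is modulated by a learned sigmoid gate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AoASketch(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.info = nn.Linear(2 * dim, dim)   # information vector
        self.gate = nn.Linear(2 * dim, dim)   # attention gate

    def forward(self, q, k, v):
        attn = F.scaled_dot_product_attention(q, k, v)  # plain attention first
        qa = torch.cat([q, attn], dim=-1)               # condition on the query
        return torch.sigmoid(self.gate(qa)) * self.info(qa)

q = k = v = torch.randn(2, 10, 32)
print(AoASketch()(q, k, v).shape)  # torch.Size([2, 10, 32])
```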