Eternal Reclaimer (kyegomez)


Company: Automated Public Assistance Company

Location: Miami

Home Page: https://www.swarms.world/

Twitter: @KyeGomezB


Eternal Reclaimer's repositories

VisionMamba

Implementation of Vision Mamba from the paper "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model". It is 2.8× faster than DeiT and saves 86.8% of GPU memory when performing batch inference to extract features on high-resolution images.

Language: Python · License: MIT · Stargazers: 217 · Issues: 4 · Issues: 4

PALM-E

Implementation of "PaLM-E: An Embodied Multimodal Language Model"

Language: Python · License: Apache-2.0 · Stargazers: 201 · Issues: 3 · Issues: 10

swarms-pytorch

Swarming algorithms like PSO, Ant Colony, Sakana, and more in PyTorch 😊

Language: Python · License: MIT · Stargazers: 86 · Issues: 3 · Issues: 3
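The repository's code isn't excerpted here, but as a hedged illustration of the kind of algorithm it covers, here is a minimal particle swarm optimization (PSO) loop in plain Python. The function name, hyperparameters, and the [-5, 5] search box are illustrative assumptions, not the library's API:

```python
import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # classic velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function; the swarm should land near the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

The same three-term velocity update is the core of most PSO variants; the PyTorch versions in the repo presumably vectorize it over tensors.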

Python-Package-Template

An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting, and much more.

Language: Shell · License: MIT · Stargazers: 74 · Issues: 1 · Issues: 0

MambaByte

Implementation of MambaByte from the paper "MambaByte: Token-free Selective State Space Model" in PyTorch and Zeta.

Language: Python · License: MIT · Stargazers: 68 · Issues: 0 · Issues: 0
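"Token-free" here means the model's vocabulary is just the 256 possible byte values, so no learned tokenizer is needed. A small sketch of that idea (the helper names are mine, not the repo's):

```python
def bytes_to_ids(text):
    """Token-free encoding: any UTF-8 string maps losslessly to a
    sequence of integer ids in [0, 255] -- the model's whole vocabulary."""
    return list(text.encode("utf-8"))

def ids_to_text(ids):
    """Exact inverse: ids are raw bytes, so decoding reconstructs the text."""
    return bytes(ids).decode("utf-8")

ids = bytes_to_ids("Mamba")  # five bytes, five ids
```

The trade-off is longer sequences than subword tokenization, which is why the paper pairs byte-level inputs with an efficient state space model.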

MoE-Mamba

Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta.

Language: Python · License: MIT · Stargazers: 53 · Issues: 0 · Issues: 0

metnet3

An implementation of MetNet-3 in PyTorch.

Language: Python · License: MIT · Stargazers: 28 · Issues: 3 · Issues: 0

Mirasol

PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities".

Language: Python · License: MIT · Stargazers: 15 · Issues: 0 · Issues: 0

MLXTransformer

A simple implementation of a Transformer in MLX, Apple's new framework.

Language: Python · License: MIT · Stargazers: 14 · Issues: 3 · Issues: 0

SwitchTransformers

Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity"

Language: Python · License: MIT · Stargazers: 14 · Issues: 0 · Issues: 0
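The defining trick in Switch Transformers is top-1 routing: each token is sent to exactly one expert, and experts have a capacity limit beyond which tokens overflow. A hedged, dependency-free sketch of that routing logic (function names and the drop-on-overflow behavior are my simplification, not the repo's code):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top1_route(token_logits):
    """Switch routing: pick the single expert with the highest router
    probability; that probability gates the expert's output."""
    probs = softmax(token_logits)
    expert = max(range(len(probs)), key=lambda e: probs[e])
    return expert, probs[expert]

def dispatch(tokens_logits, n_experts, capacity):
    """Assign each token to its top-1 expert; tokens beyond an expert's
    capacity are dropped (passed through the residual in a real layer)."""
    assignments, load = [], [0] * n_experts
    for logits in tokens_logits:
        e, p = top1_route(logits)
        if load[e] < capacity:
            load[e] += 1
            assignments.append((e, p))
        else:
            assignments.append(None)  # overflow: expert is skipped
    return assignments

# A token whose router logits favor expert 2:
expert, gate = top1_route([0.1, -0.2, 1.3, 0.0])
```

Top-1 routing keeps compute per token constant no matter how many experts (and hence parameters) the layer holds, which is how the paper scales to trillion-parameter models.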

TeraGPT

Train a production-grade GPT in less than 400 lines of code. Better than Karpathy's version and GIGAGPT.

Language: Python · License: MIT · Stargazers: 13 · Issues: 2 · Issues: 0

USM

Implementation of Google's USM speech model in Pytorch

Language: Python · License: MIT · Stargazers: 12 · Issues: 0 · Issues: 0

FastFF

Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling".

Language: Python · License: MIT · Stargazers: 11 · Issues: 2 · Issues: 1
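"Exponentially Faster Language Modeling" proposes fast feedforward networks: a binary tree of node neurons where inference walks one root-to-leaf path, so only depth-many node neurons fire instead of all of them. A hedged toy sketch of that conditional evaluation (shapes and names are illustrative, not the repo's API):

```python
def fff_infer(x, node_w, leaf_w, depth):
    """Walk a depth-`depth` binary tree: each node neuron's dot product
    with x picks a branch, so only `depth` node neurons are evaluated
    instead of all 2**depth - 1. The selected leaf's weight matrix then
    produces the output."""
    node = 0
    for _ in range(depth):
        score = sum(w * xi for w, xi in zip(node_w[node], x))
        node = 2 * node + (1 if score > 0 else 2)  # left / right child
    leaf = node - (2 ** depth - 1)  # index among the 2**depth leaves
    return [sum(w * xi for w, xi in zip(row, x)) for row in leaf_w[leaf]]
```

With n leaf blocks, inference touches O(log n) node neurons plus one leaf block, which is where the "exponentially faster" claim comes from.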

M2PT

Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities"

Language: Python · License: MIT · Stargazers: 11 · Issues: 2 · Issues: 0

Qwen-VL

My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities", since the official model code hasn't been released yet.

Language: Python · License: MIT · Stargazers: 11 · Issues: 2 · Issues: 0

Poly

A fluid, polymorphic, and shapeless types package that enables radical flexibility and simplicity in your programs.

Language: Python · License: MIT · Stargazers: 8 · Issues: 1 · Issues: 0

GATS

Implementation of GATS from the paper "GATS: Gather-Attend-Scatter" in PyTorch and Zeta.

Language: Python · License: MIT · Stargazers: 7 · Issues: 0 · Issues: 0

SoundStream

Implementation of SoundStream from the paper "SoundStream: An End-to-End Neural Audio Codec".

Language: Python · License: MIT · Stargazers: 7 · Issues: 0 · Issues: 0
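The compression core of SoundStream is a residual vector quantizer (RVQ): each stage quantizes the residual error left by the previous stage, so a few small codebooks compose into a fine-grained code. A hedged pure-Python sketch of the encoding step (codebook shapes and names are illustrative assumptions):

```python
def rvq_encode(x, codebooks):
    """Residual VQ: stage k snaps the running residual to its nearest
    codeword, records the index, and passes the new residual on. The
    returned codes reconstruct x as the sum of the chosen codewords."""
    residual, codes = list(x), []
    for cb in codebooks:
        # nearest codeword by squared Euclidean distance
        idx = min(range(len(cb)),
                  key=lambda i: sum((r - c) ** 2 for r, c in zip(residual, cb[i])))
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, cb[idx])]
    return codes, residual

# Two tiny 2-entry codebooks over 2-D vectors:
codebooks = [[[0.0, 0.0], [1.0, 1.0]],
             [[0.1, -0.1], [-0.1, 0.1]]]
codes, residual = rvq_encode([1.1, 0.9], codebooks)
```

Stacking N stages of K codewords gives K^N effective codewords while storing only N·K vectors, which is why RVQ is the standard quantizer in neural audio codecs.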

CELESTIAL-1

Omni-Modality Processing, Understanding, and Generation

Language: Python · License: Apache-2.0 · Stargazers: 6 · Issues: 3 · Issues: 0

FlamingoZeta

Flamingo implemented in Zeta + PyTorch primitives for high-performance multimodal learning.

Language: Python · License: MIT · Stargazers: 6 · Issues: 0 · Issues: 0

Midas

Implementation of MiDaS from "Towards Robust Monocular Depth Estimation" in PyTorch and Zeta.

Language: Shell · License: MIT · Stargazers: 6 · Issues: 1 · Issues: 0

ShallowFF

Zeta implementation of "Rethinking Attention: Exploring Shallow Feed-Forward Neural Networks as an Alternative to Attention Layers in Transformers".

Language: Python · License: MIT · Stargazers: 6 · Issues: 2 · Issues: 1

SwarmsDiscord

A Discord bot that can do anything.

Language: Python · License: MIT · Stargazers: 6 · Issues: 0 · Issues: 0

GiediPrime

An experimental architecture using a Mixture of Attentions with sandwiched Macaron feedforwards and other modules.

Language: Python · License: MIT · Stargazers: 5 · Issues: 0 · Issues: 0

TritonTransformer

Transformer Implementation in Triton

Language: Python · License: MIT · Stargazers: 4 · Issues: 0 · Issues: 0

AoA-torch

Implementation of Attention on Attention in Zeta

Language: Python · License: MIT · Stargazers: 3 · Issues: 2 · Issues: 0

AutoGPT

An experimental open-source attempt to make GPT-4 fully autonomous.

Language: JavaScript · License: MIT · Stargazers: 3 · Issues: 0 · Issues: 0

deepmind-research

This repository contains implementations and illustrative code to accompany DeepMind publications.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 2 · Issues: 1 · Issues: 0

gill

🐟 Code and models for the paper "Generating Images with Multimodal Language Models".

License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
Language: Python · Stargazers: 1 · Issues: 1 · Issues: 0