Raymond Hernandez (rayhern)

Location: San Clemente, CA

Home Page: https://github.com/rayhern/

Twitter: @bizong


Raymond Hernandez's repositories

TheAnimalFarmAutoGardener

An auto-gardener for the gardening game at https://theanimal.farm, written in Python 3.

Language: Python · License: MIT · Stargazers: 3 · Issues: 0 · Issues: 0

UniswapV2Python

My UniswapV2 controller class, written in Python, to power my degeneracy.

Language: Python · License: MIT · Stargazers: 1 · Issues: 1 · Issues: 1
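
The listing doesn't show usage, but a controller like this typically wraps the UniswapV2 Router02 contract via web3.py. Below is a minimal sketch of that idea; the router address and the getAmountsOut ABI fragment are the standard public ones, while the RPC endpoint and the `quote` helper are purely illustrative and not this repo's actual API:

```python
# Minimal sketch of a UniswapV2 "controller" idea using web3.py.
# Assumptions: web3.py installed, an RPC endpoint, and the public
# Router02 getAmountsOut view call. Not this repo's actual API.
from web3 import Web3

ROUTER = "0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D"  # UniswapV2 Router02 (mainnet)
ROUTER_ABI = [{
    "name": "getAmountsOut", "type": "function",
    "inputs": [{"name": "amountIn", "type": "uint256"},
               {"name": "path", "type": "address[]"}],
    "outputs": [{"name": "amounts", "type": "uint256[]"}],
    "stateMutability": "view",
}]

w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))  # hypothetical endpoint
router = w3.eth.contract(address=ROUTER, abi=ROUTER_ABI)

def quote(amount_in_wei: int, path: list[str]) -> int:
    """Return the expected output amount for a swap along `path`
    (checksummed token addresses, input token first)."""
    return router.functions.getAmountsOut(amount_in_wei, path).call()[-1]
```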

audio-datasets

Open-source audio datasets.

Stargazers: 0 · Issues: 0 · Issues: 0

Bard

Reverse engineering of Google's Bard API

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

Deep-Fake_First_Order_Model

This repo implements the First Order Motion Model for making deepfakes. It was inspired by a Two Minute Papers YouTube video about deepfakes; the original code is by @AliaksandrSiarohin.

License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

EdgeGPT

Reverse-engineered API of Microsoft's Bing Chat AI.

License: Unlicense · Stargazers: 0 · Issues: 0 · Issues: 0

FasterTransformer

Transformer-related optimization, including BERT and GPT.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

fastT5

⚡ Boost the inference speed of T5 models by 5x and reduce the model size by 3x.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
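
fastT5's approach is to export the T5 encoder/decoder to quantized ONNX graphs and generate from those. A sketch of that flow, with `export_and_get_onnx_model` being the entry point I recall from the upstream README (verify against the repo before relying on it):

```python
# Sketch of the fastT5 flow: export T5 to quantized ONNX, then generate.
# `export_and_get_onnx_model` is the upstream entry point as I recall;
# double-check the repo's README before relying on exact names.
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model = export_and_get_onnx_model("t5-small")          # exports + quantizes ONNX graphs
tokenizer = AutoTokenizer.from_pretrained("t5-small")

inputs = tokenizer("translate English to French: hello world", return_tensors="pt")
tokens = model.generate(input_ids=inputs["input_ids"],
                        attention_mask=inputs["attention_mask"],
                        num_beams=2)
print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
```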

flash-gpt

Add Flash-Attention to Hugging Face models.

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
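
The primitive being patched in here is fused scaled-dot-product attention. For context, here is what that primitive looks like using PyTorch's own built-in kernel (stock torch >= 2.0 API; this is an illustration of the technique, not this repo's patching code):

```python
# Illustration of the fused-attention primitive that Flash-Attention provides,
# via PyTorch's built-in kernel (torch >= 2.0). Not this repo's code.
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)

# One fused call replaces softmax(q @ k.T / sqrt(d)) @ v, avoiding
# materialization of the full (seq_len x seq_len) attention matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```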

IPRotate_Burp_Extension

Extension for Burp Suite which uses AWS API Gateway to rotate your IP on every request.

Stargazers: 0 · Issues: 0 · Issues: 0

lightning-text-classification

Minimalist implementation of a BERT Sentence Classifier with PyTorch Lightning, Transformers and PyTorch-NLP.

Stargazers: 0 · Issues: 0 · Issues: 0
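
For a rough idea of the shape of such a classifier, here is a generic PyTorch Lightning + Transformers sketch; the class, hyperparameters, and batch format are illustrative assumptions, not this repo's actual module:

```python
# Generic sketch of a BERT sentence classifier in PyTorch Lightning.
# Illustrative only; not this repo's actual module or hyperparameters.
import torch
import pytorch_lightning as pl
from transformers import AutoModelForSequenceClassification

class BertClassifier(pl.LightningModule):
    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=num_labels)

    def training_step(self, batch, batch_idx):
        # `batch` is assumed to hold tokenized inputs plus integer `labels`,
        # so the HF model computes the cross-entropy loss itself.
        out = self.model(**batch)
        self.log("train_loss", out.loss)
        return out.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=2e-5)

# trainer = pl.Trainer(max_epochs=3)
# trainer.fit(BertClassifier(), train_dataloaders=my_dataloader)
```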

lightning-transformers

Flexible components pairing 🤗 Transformers with ⚡ PyTorch Lightning.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

llama-rs

Run LLaMA inference on CPU, with Rust 🦀🚀🦙

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

llamafile

Distribute and run LLMs with a single file.

License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

LlamaGPTJ-chat

Simple chat program for LLaMa, GPT-J, and MPT models.

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

mpt-play

Command-line script for running inference with models such as MPT-7B-Chat.

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
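
MPT-7B-Chat ships on the Hugging Face Hub with custom model code, so a minimal inference script, independent of this repo and assuming enough RAM for a 7B model, looks roughly like:

```python
# Rough, repo-independent sketch of MPT-7B-Chat inference via transformers.
# Assumes the Hub checkpoint "mosaicml/mpt-7b-chat"; trust_remote_code is
# required because MPT ships custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)

inputs = tokenizer("What is a good name for a llama?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```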

nft-armory

Simple tool to display, mint, and modify your Metaplex NFTs

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

notebook-utils

Commonly used notebook functions and classes that I reach for time and time again; cloning this repo makes them available in any environment. Mainly for Google Colab+ notebook instances.

Language: Python · Stargazers: 0 · Issues: 1 · Issues: 0

NVIDIAPyTorchLM

State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.

Stargazers: 0 · Issues: 0 · Issues: 0

open-llm-leaderboard

Open LLM Leaderboard

Language: Python · Stargazers: 0 · Issues: 0 · Issues: 0

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
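
The "RNN mode" claim comes from RWKV's WKV operator, which replaces attention and can be evaluated as a simple recurrence. A per-channel numpy sketch of that recurrence, simplified from the RWKV paper (the numerically stable version also tracks a running maximum, omitted here; this is not the repo's CUDA kernel):

```python
# Simplified per-channel WKV recurrence from the RWKV paper (numerical
# stability tricks omitted). k, v are per-timestep scalars for one channel;
# w, u are the learned decay and "current-token bonus".
import numpy as np

def wkv(k: np.ndarray, v: np.ndarray, w: float, u: float) -> np.ndarray:
    a, b = 0.0, 0.0               # running weighted sum and normalizer
    out = np.empty_like(v)
    for t in range(len(k)):
        # the current token gets a bonus weight e^(u + k_t)
        out[t] = (a + np.exp(u + k[t]) * v[t]) / (b + np.exp(u + k[t]))
        # past state decays by e^(-w) each step, then absorbs token t
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out
```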

solana-mass-transfer

Move all your tokens, NFTs, and SOL to a new wallet.

Stargazers: 0 · Issues: 0 · Issues: 0

stanford_alpaca

Code and documentation to train Stanford's Alpaca models and generate the data.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

transformer-ls

Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).

License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
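
NVIDIA's documented pattern for TransformerEngine is to swap in TE modules and wrap forward passes in an FP8 autocast context. Roughly, from memory of the TE docs (double-check the exact recipe API; this requires the transformer_engine package and Hopper-class or newer GPUs):

```python
# Rough sketch of the TransformerEngine FP8 pattern: TE layers plus an
# fp8_autocast context. From memory of NVIDIA's docs; verify before use.
import torch
import transformer_engine.pytorch as te

layer = te.Linear(768, 768, bias=True).cuda()
x = torch.randn(32, 768, device="cuda")

with te.fp8_autocast(enabled=True):
    y = layer(x)  # matmuls run in FP8 where the hardware supports it
```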

universal-distillation

🧪 Create domain-adapted language models by distilling from many pre-trained LMs.

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
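
The core mechanism behind any such distillation is matching the student's output distribution to the teacher's temperature-softened logits. A generic PyTorch knowledge-distillation loss (the standard technique, not necessarily this repo's exact objective):

```python
# Generic knowledge-distillation loss: KL divergence between temperature-
# softened teacher and student logits. Standard technique; not necessarily
# the exact objective used by this repo.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    # Soften both distributions with temperature T; scale by T^2 so
    # gradient magnitudes stay comparable across temperatures.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```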