nartiz's starred repositories
unlimiformer
Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input"
LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour using 1.2M parameters
CoLT5-attention
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
fold
🪁 A fast adaptive machine-learning library for time series that lets you build, deploy, and update composite models easily, offering an order-of-magnitude speed-up combined with flexibility and rigour. This is an internal project; the documentation is no longer updated and differs substantially from the current API.
llama-int8
Quantized inference code for LLaMA models
text-generation-inference
Large Language Model Text Generation Inference
Nonstationary_Transformers
Code release for "Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting" (NeurIPS 2022), https://arxiv.org/abs/2205.14415
connector-x
Fastest library to load data from databases into DataFrames in Rust and Python
numerai-benchmark
Python Code used in publications, for archival purposes only
symbolicai
Compositional Differentiable Programming Library
Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
awesome-chatgpt-prompts
A curated collection of ChatGPT prompts to help you use ChatGPT more effectively.
flash-attention
Fast and memory-efficient exact attention