Brando Miranda's repositories

ultimate-utils

Brando's utils

Language: Python · License: MIT · Stargazers: 11 · Issues: 6 · Issues: 11

ultimate-pycoq

A reliable python-coq

Language: Python · License: MIT · Stargazers: 5 · Issues: 3 · Issues: 3

pycoq

Python API to coq-serapi

Language: Coq · License: MIT · Stargazers: 4 · Issues: 2 · Issues: 17

evaporate

This repo contains data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes"

Language: Python · Stargazers: 1 · Issues: 1 · Issues: 0

alpaca_farm

A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

cheerios

Formally verified Coq serialization library with support for extraction to OCaml

Language: Coq · Stargazers: 0 · Issues: 1 · Issues: 0

gpt4all

gpt4all: a chatbot trained on a massive collection of clean assistant data, including code, stories, and dialogue

Language: Python · Stargazers: 0 · Issues: 1 · Issues: 0

jq

Command-line JSON processor

Language: C · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

langchain

⚡ Building applications with LLMs through composability ⚡

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

llama_index

LlamaIndex (GPT Index) is a data framework for your LLM applications

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

lm-evaluation-harness

A framework for few-shot evaluation of autoregressive language models.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

long_llama

LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

Megatron-LM

Ongoing research training transformer models at scale

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

metalib

The Penn Locally Nameless Metatheory Library

Language: Coq · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

parsel

Code for Parsel 🐍 — generating complex programs with language models

Language: Python · Stargazers: 0 · Issues: 1 · Issues: 0

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

Portal-to-ISAbelle

https://albertqjiang.github.io/Portal-to-ISAbelle/

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 1 · Issues: 0

pytorch-meta-dataset

An unofficial, 100% PyTorch implementation of the META-DATASET benchmark for few-shot classification

Language: Python · Stargazers: 0 · Issues: 1 · Issues: 24

software-foundations-solutions

My solutions to the Software Foundations book

Language: HTML · Stargazers: 0 · Issues: 3 · Issues: 0

stanford_alpaca

Code and documentation to train Stanford's Alpaca models and generate the data.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

tuning_playbook

A playbook for systematically maximizing the performance of deep learning models.

License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

x-transformers

A simple but complete full-attention transformer with a set of promising experimental features from various papers

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0