P-Lambda (p-lambda)


Location: Stanford, California


P-Lambda's repositories

wilds

A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models.

Language: Python | License: MIT | Stargazers: 534 | Watchers: 20 | Issues: 52
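
The wilds README documents a simple loading interface; below is a minimal sketch of that pattern (the dataset name, image size, and batch size are illustrative choices, not defaults):

    # Minimal WILDS loading loop, following the patterns documented in the
    # repo's README; dataset choice and batch size here are illustrative.
    import torchvision.transforms as transforms
    from wilds import get_dataset
    from wilds.common.data_loaders import get_train_loader

    # Download the benchmark data and take the official train split.
    dataset = get_dataset(dataset="iwildcam", download=True)
    train_data = dataset.get_subset(
        "train",
        transform=transforms.Compose(
            [transforms.Resize((448, 448)), transforms.ToTensor()]
        ),
    )

    # "standard" gives a plain (non-group-balanced) training loader.
    train_loader = get_train_loader("standard", train_data, batch_size=16)
    for x, y_true, metadata in train_loader:
        pass  # train a model; metadata identifies each example's domain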

dsir

DSIR: a large-scale data selection framework for language model training.

Language: Python | License: MIT | Stargazers: 194 | Watchers: 22 | Issues: 7
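
The DSIR README documents a hashed n-gram selection workflow (installable as the data-selection package); the sketch below follows that pattern, with all paths and sample counts as placeholders:

    # Sketch of DSIR's documented hashed n-gram workflow; every path and
    # count below is a placeholder.
    from data_selection import HashedNgramDSIR

    raw_datasets = ["/path/to/raw1.jsonl", "/path/to/raw2.jsonl"]
    target_datasets = ["/path/to/target.jsonl"]

    # Fit importance estimators on raw vs. target data, weight each raw
    # example, then resample the raw pool toward the target distribution.
    dsir = HashedNgramDSIR(raw_datasets, target_datasets,
                           cache_dir="/tmp/dsir_cache")
    dsir.fit_importance_estimator(num_tokens_to_fit="auto")
    dsir.compute_importance_weights()
    dsir.resample(out_dir="resampled", num_to_sample=1000000,
                  cache_dir="/tmp/resampled_cache")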

jukemir

Perform transfer learning for music information retrieval (MIR) using Jukebox!

Language: Shell | License: MIT | Stargazers: 162 | Watchers: 11 | Issues: 13

verified_calibration

Calibration library and code for the paper "Verified Uncertainty Calibration" by Ananya Kumar, Percy Liang, and Tengyu Ma (NeurIPS 2019, Spotlight).

Language: Python | License: MIT | Stargazers: 132 | Watchers: 6 | Issues: 16
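
The library is published on PyPI as uncertainty-calibration; the sketch below follows the README's measure-then-recalibrate workflow, run here on synthetic probabilities (the data and bin count are illustrative):

    # Sketch of the measure-then-recalibrate workflow from the library's
    # README (pip install uncertainty-calibration); data here is synthetic.
    import numpy as np
    import calibration as cal

    rng = np.random.default_rng(0)
    n = 1000
    zs = rng.uniform(size=(n, 3))
    zs = zs / zs.sum(axis=1, keepdims=True)  # fake 3-class probabilities
    ys = rng.integers(0, 3, size=n)          # fake labels

    # Estimate the calibration error of the raw probabilities.
    print("error before:", cal.get_calibration_error(zs, ys))

    # Recalibrate with the paper's Platt-binning estimator, then re-measure.
    calibrator = cal.PlattBinnerMarginalCalibrator(n, num_bins=10)
    calibrator.train_calibration(zs, ys)
    print("error after:",
          cal.get_calibration_error(calibrator.calibrate(zs), ys))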

incontext-learning

Experiments and code to generate GINC, the small-scale in-context learning dataset from "An Explanation of In-context Learning as Implicit Bayesian Inference".

swords

The Stanford Word Substitution (Swords) Benchmark

in-n-out

Code for the ICLR 2021 Paper "In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness"

robust_tradeoff

Code for the ICML 2020 paper "Understanding and Mitigating the Tradeoff Between Robustness and Accuracy" by Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. Paper available at https://arxiv.org/pdf/2002.10716.pdf.

Language: Python | License: MIT | Stargazers: 8 | Watchers: 4 | Issues: 0

composed_finetuning

Code for the ICML 2021 paper "Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization" by Sang Michael Xie, Tengyu Ma, and Percy Liang.

Language: Python | License: MIT | Stargazers: 5 | Watchers: 2 | Issues: 0

LinkBERT

[ACL 2022] LinkBERT: A Knowledgeable Language Model 😎 Pretrained with Document Links

Language: Python | License: Apache-2.0 | Stargazers: 1 | Watchers: 0 | Issues: 0
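
Pretrained LinkBERT checkpoints are published on the Hugging Face Hub (e.g. michiyasunaga/LinkBERT-base, per the upstream README), so the standard transformers interface should suffice to embed text:

    # Load a pretrained LinkBERT checkpoint from the Hugging Face Hub and
    # embed a sentence; the model id follows the upstream README.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("michiyasunaga/LinkBERT-base")
    model = AutoModel.from_pretrained("michiyasunaga/LinkBERT-base")

    inputs = tokenizer("Stanford is located in California.",
                       return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)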

dragon

[NeurIPS 2022] DRAGON 🐲: Deep Bidirectional Language-Knowledge Graph Pretraining

License: Apache-2.0 | Stargazers: 0 | Watchers: 0 | Issues: 0