LouChao98's repositories

VLGAE

Official Implementation for CVPR 2022 paper "Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language Structures via Dependency Relationships"

Language: Python · License: MIT · Stargazers: 23 · Issues: 5 · Issues: 2

AutoCompressors

Adapting Language Models to Compress Long Contexts

Language: Python · Stargazers: 0 · Issues: 0

CoLT5-attention

Implementation of the conditionally routed attention from the CoLT5 architecture, in PyTorch

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Diffusion-LM

Diffusion-LM Improves Controllable Text Generation

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

graph_ensemble_learning

Graph Ensemble Learning

License: Apache-2.0 · Stargazers: 0 · Issues: 0

easy-oa

Chrome extension for open-access sites such as arXiv and OpenReview: (1) jump from a PDF back to its abstract page; (2) rename the PDF tab with the paper title.

Language: JavaScript · License: Apache-2.0 · Stargazers: 0 · Issues: 0

easy-to-hard

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

GaLore

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

License: Apache-2.0 · Stargazers: 0 · Issues: 0

lambeq

A high-level Python library for Quantum Natural Language Processing

License: Apache-2.0 · Stargazers: 0 · Issues: 0

landmark-attention

Landmark Attention: Random-Access Infinite Context Length for Transformers

License: Apache-2.0 · Stargazers: 0 · Issues: 0

lightning

Build and train PyTorch models and connect them to the ML lifecycle using Lightning App templates, without handling DIY infrastructure, cost management, scaling, and other headaches.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

llama-moe

⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

lp-sparsemap

LP-SparseMAP: Differentiable sparse structured prediction in coarse factor graphs

License: MIT · Stargazers: 0 · Issues: 0

non_neg

Official Code for ICLR 2024 Paper: Non-negative Contrastive Learning

Stargazers: 0 · Issues: 0

parserllm

Use context-free grammars with an LLM

License: MIT · Stargazers: 0 · Issues: 0

picard

PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. A ServiceNow Research project that was started at Element AI.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

Pushdown-Layers

Code for Pushdown Layers from our EMNLP 2023 paper

Stargazers: 0 · Issues: 0

stack-attention

Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns"

Stargazers: 0 · Issues: 0

transformer_grammars

Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022)

License: Apache-2.0 · Stargazers: 0 · Issues: 0