tttyuntian's repositories
vlm_lexical_grounding
PyTorch code for the Findings of EMNLP 2021 paper "Does Vision-and-Language Pretraining Improve Lexical Grounding?"
vlm_primitive_concepts
Code for the paper "Do Vision-Language Pretrained Models Learn Composable Primitive Concepts?"
Statistics-Thesis
My statistics research on natural language processing, conducted with Dr. Nicole Dalzell.
abstract-state-seqmodel
Code for the EMNLP 2023 paper "Emergence of Abstract State Representations in Embodied Sequence Modeling"
abstract_state_seqmodel
Code for the EMNLP 2023 paper "Emergence of Abstract State Representations in Embodied Sequence Modeling"
ACRE
ACRE: Abstract Causal REasoning Beyond Covariation
ALBEF
Code for ALBEF: a new vision-language pre-training method
babyai
BabyAI platform. A testbed for training agents to understand and execute language commands.
CLIP
Contrastive Language-Image Pretraining
czsl
PyTorch compositional zero-shot learning (CZSL) framework containing GQA, the open-world setting, and the CGE and CompCos methods.
fix_stopping_criteria
A framework for few-shot evaluation of autoregressive language models.
gym-minigrid
Minimalistic gridworld package for OpenAI Gym
grounded-seqmodel
Code for the EMNLP 2023 paper "Emergence of Abstract State Representations in Embodied Sequence Modeling"
gym
A toolkit for developing and comparing reinforcement learning algorithms.
iGibson
A Simulation Environment to train Robots in Large Realistic Interactive Scenes
llama-recipes
Examples and recipes for the Llama 2 model
NeuMesh
Code for "MeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing", ECCV 2022 Oral
othello_world
Emergent world representations: Exploring a sequence model trained on a synthetic task
RAVEN_FAIR
Balanced RAVEN dataset from the paper: 'Scale-Localized Abstract Reasoning'.
slot-attention
Implementation of Slot Attention from GoogleAI
SRAN
Stratified Rule-Aware Network for Abstract Visual Reasoning, AAAI 2021
text2mesh
3D mesh stylization driven by a text input in PyTorch
tttyuntian.github.io
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
ViLT
Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"
visualbert
Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language"