Harman Singh's repositories
Multilingual-Question-Answering-NLP
Multilingual QA - Submission to the Chaii (Challenge in AI for India) competition on Kaggle
AdaptiveConsistency
Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs
clevr-dataset-gen
A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
CORA
Official implementation of the NeurIPS 2021 paper "One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval"
DIG
Discretized Integrated Gradients for Explaining Language Models (EMNLP 2021)
Elevater_Toolkit_IC
Toolkit for the ELEVATER image-classification benchmark
FLYP
Code for Finetune like you pretrain: Improved finetuning of zero-shot vision models
GeNeVA
Code to train and evaluate the GeNeVA-GAN model for the GeNeVA task proposed in our ICCV 2019 paper "Tell, Draw, and Repeat: Generating and modifying images based on continual linguistic instruction"
GeNeVA_datasets
Scripts to generate the CoDraw and i-CLEVR datasets used for the GeNeVA task proposed in our ICCV 2019 paper "Tell, Draw, and Repeat: Generating and modifying images based on continual linguistic instruction"
harmandotpy.github.io
Code for personal website
iclr2023-scores
A simple script to extract scores of publicly available ICLR submissions from OpenReview.
kilogram
The KiloGram Tangrams dataset
lm-template
PyTorch Lightning + Hydra + Neptune template for LM finetuning
metaprompt
Meta-prompt: a simple self-improving language agent
MQuAKE
MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
nfn
NF-Layers as described in this paper: https://arxiv.org/abs/2302.14040
open_clip
An open source implementation of CLIP.
Oscar
Oscar and VinVL
Q-learning-RL_Assignment3
Implementation of Q-learning for Assignment 3 of the MDP and RL course
RobustLR
A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners
setGPU
Small Python library to automatically set CUDA_VISIBLE_DEVICES to the least loaded device on multi-GPU systems.
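Not setGPU's actual implementation, but a minimal sketch of the idea it describes: query `nvidia-smi` for per-GPU memory use and point `CUDA_VISIBLE_DEVICES` at the least loaded device (assumes `nvidia-smi` is on PATH; the function and variable names here are illustrative):

```python
import os
import subprocess


def least_loaded_gpu(memory_used):
    """Return the index of the GPU using the least memory."""
    return min(range(len(memory_used)), key=lambda i: memory_used[i])


def set_least_loaded_gpu(environ=os.environ):
    """Query nvidia-smi and set CUDA_VISIBLE_DEVICES to the least loaded GPU.

    `environ` is injectable so the selection logic can be tested without GPUs.
    """
    # One integer per GPU, e.g. "3042\n120\n811"
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    used = [int(line) for line in out.strip().splitlines()]
    environ["CUDA_VISIBLE_DEVICES"] = str(least_loaded_gpu(used))
```

Importing such a module before any CUDA framework initializes (e.g. before `import torch`) is the usual trick, since `CUDA_VISIBLE_DEVICES` is only read at initialization.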
submitit
Python 3.6+ toolbox for submitting jobs to Slurm
sugar-crepe
A faithful benchmark for vision-language compositionality
ViLT
Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"
wise-ft
Robust fine-tuning of zero-shot models