Peter Bernoulli's repositories

awesome-docking-KyGao

An awesome & curated list of docking papers

License: GPL-3.0 · Stargazers: 0 · Issues: 0

awesome-phd-advice_PaulLiangCMU

Collection of advice for prospective and current PhD students

License: MIT · Language: Python · Stargazers: 0 · Issues: 0

chatgpt-on-openie-with-PoloWitty

Evaluate ChatGPT's performance on the Open Information Extraction task using the CaRB dataset

Language: Python · Stargazers: 0 · Issues: 0
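
An evaluation like this comes down to comparing model-extracted (subject, relation, object) triples against gold annotations. The sketch below is only an illustration of that loop under a simple exact-match criterion; the example triples, the `precision_recall` helper, and the normalization are assumptions, and CaRB's actual scorer uses a more lenient token-level tuple matching.

```python
# Hypothetical sketch: scoring model-extracted (subject, relation, object) triples
# against gold triples with a simple exact-match criterion. CaRB's real scorer is
# more forgiving (token-level tuple matching); this only shows the shape of the loop.
from typing import List, Tuple

Triple = Tuple[str, str, str]

def normalize(t: Triple) -> Triple:
    # Lowercase and strip whitespace so trivially different strings still match.
    return tuple(part.lower().strip() for part in t)

def precision_recall(pred: List[Triple], gold: List[Triple]) -> Tuple[float, float]:
    pred_set = {normalize(t) for t in pred}
    gold_set = {normalize(t) for t in gold}
    if not pred_set or not gold_set:
        return 0.0, 0.0
    hits = len(pred_set & gold_set)
    return hits / len(pred_set), hits / len(gold_set)

# Example: one sentence's model extractions vs. gold triples (both made up here).
pred = [("Marie Curie", "won", "the Nobel Prize"), ("Marie Curie", "was born in", "Warsaw")]
gold = [("Marie Curie", "won", "the Nobel Prize")]
p, r = precision_recall(pred, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=1.00
```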

ContextualSP-microsoft

Open-source code for multiple papers from the Microsoft Research Asia DKI group

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

GNNPapers-thunlp

Must-read papers on graph neural networks (GNN)

Stargazers: 0 · Issues: 0

GraphQ_IR-Semantic-Parsing-of-Graph-Query-Languages

A Unified Intermediate Representation for Graph Query Languages

Language: Python · Stargazers: 0 · Issues: 0

hello-world

The Big Bang

Stargazers: 0 · Issues: 0

KEAR-microsoft-QA

Official code for achieving human parity on CommonsenseQA with External Attention

Language: Python · Stargazers: 0 · Issues: 0

KENLG-Reading

Author: Wenhao Yu (wyu1@nd.edu). ACM Computing Surveys '22. Reading list for knowledge-enhanced text generation, with a survey.

Stargazers: 0 · Issues: 0

LocalGraphClustering

kfoynt/LocalGraphClustering

License: MIT · Stargazers: 0 · Issues: 0

PLMpapers_thunlp

Must-read papers on pre-trained language models.

License: MIT · Stargazers: 0 · Issues: 0

PySyft

A library for answering questions using data you cannot see

Stargazers: 0 · Issues: 0

RE-GCN-Lee-zix

Official code release for the paper: Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang and Xueqi Cheng. "Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning."

Stargazers: 0 · Issues: 0

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
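
As a rough intuition for the recurrent-inference claim in that description, the sketch below shows a generic fixed-size recurrent state update: per-token cost and memory stay constant no matter how long the context gets, unlike attention over a growing history. This is not RWKV's actual time-mixing/WKV formulation; the shapes, weights, and update rule are placeholder assumptions.

```python
# Generic sketch of RNN-style token-by-token inference with a fixed-size state.
# NOT RWKV's actual WKV computation; it only illustrates why a recurrent model
# needs constant memory per step instead of attending over the full context.
import numpy as np

d = 8                               # hidden size (assumed)
W_in = np.random.randn(d, d) * 0.1  # input projection (placeholder weights)
W_state = np.random.randn(d, d) * 0.1
W_out = np.random.randn(d, d) * 0.1

def step(state: np.ndarray, x: np.ndarray):
    """One recurrence step: the new state depends only on the previous state and
    the current token embedding, so memory does not grow with context length."""
    new_state = np.tanh(W_state @ state + W_in @ x)
    logits = W_out @ new_state
    return new_state, logits

state = np.zeros(d)
tokens = [np.random.randn(d) for _ in range(5)]  # stand-in token embeddings
for x in tokens:
    state, logits = step(state, x)
print(state.shape, logits.shape)  # (8,) (8,) -- state size is fixed throughout
```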

SuperGen-Neurips2022

[NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding

License: Apache-2.0 · Stargazers: 0 · Issues: 0
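
The title names a recipe: have a language model synthesize labeled training data, then train a task model on it with no human-labeled examples. The sketch below is a hypothetical, heavily simplified illustration of that idea; the label-conditioned prompts, the stubbed `generate_text()` (a stand-in for any LM API), and the tiny TF-IDF classifier are all assumptions, not SuperGen's actual pipeline.

```python
# Hypothetical sketch of "generate training data with an LM, then train a classifier".
# generate_text() is a canned stub standing in for a real language-model call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def generate_text(label: str) -> list[str]:
    """Placeholder for an LM call that continues a label-conditioned prompt."""
    canned = {
        "positive": ["an absolute delight from start to finish",
                     "warm, funny, and beautifully acted"],
        "negative": ["a tedious, joyless slog",
                     "flat characters and a plot that goes nowhere"],
    }
    return canned[label]

# 1) Prompt the LM per label to synthesize labeled examples.
texts, labels = [], []
for label in ("positive", "negative"):
    for review in generate_text(label):
        texts.append(review)
        labels.append(label)

# 2) Train a small classifier on the synthetic data only (zero human labels).
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# 3) Classify unseen text; with so few synthetic examples this is purely illustrative.
print(clf.predict(vec.transform(["a tedious plot with flat characters"])))
```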