Andy Zhao (AndyJZhao)

Company: Quebec AI Institute (Mila)

Location: Montreal, QC, Canada

Home Page: https://andyjzhao.github.io/

Andy Zhao's repositories

HGSL

Source code of AAAI'21: Heterogeneous Graph Structure Learning for Graph Neural Networks

Language: Python · License: Apache-2.0 · Stargazers: 107 · Issues: 3 · Issues: 9
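
Roughly, graph structure learning augments or replaces the observed adjacency with one inferred from node features. A minimal NumPy sketch of that general idea (the cosine-similarity graph, threshold `eps`, and fusion weight `lam` are illustrative assumptions, not HGSL's actual heterogeneous, metapath-aware formulation):

```python
import numpy as np

def learn_structure(X, A_obs, eps=0.6, lam=0.5):
    """Toy structure learning: a cosine-similarity graph thresholded
    at eps, fused with the observed adjacency. Illustrative only."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    S = Xn @ Xn.T                          # pairwise cosine similarity
    A_feat = (S > eps).astype(float)       # keep confident edges only
    np.fill_diagonal(A_feat, 0.0)
    return lam * A_obs + (1.0 - lam) * A_feat

X = np.random.rand(5, 8)                   # 5 nodes, 8-dim features
A = (np.random.rand(5, 5) > 0.7).astype(float)
print(learn_structure(X, A))
```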

ReasoningArxiv

Reasoning Arxiv feed, forked from MyArxiv

Language: CSS · License: GPL-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0

alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.

Language: Python · Stargazers: 0 · Issues: 0 · Issues: 0

dgl_forked

Python package built to ease deep learning on graphs, on top of existing DL frameworks.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
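
For context, a minimal sketch of the kind of workflow DGL supports: build a tiny graph and apply one graph-convolution layer (toy sizes; the fork itself may differ from upstream):

```python
import dgl
import torch
from dgl.nn import GraphConv

# Toy 4-node chain graph, made bidirectional, with self-loops added
# because GraphConv expects every node to have an in-edge.
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])), num_nodes=4)
g = dgl.add_self_loop(dgl.to_bidirected(g))

feat = torch.randn(4, 8)        # 8-dim node features
conv = GraphConv(8, 4)          # one graph-convolution layer
print(conv(g, feat).shape)      # torch.Size([4, 4])
```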

evo

DNA foundation modeling from molecular to genome scale

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
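
One way FastChat serves models is through an OpenAI-compatible REST server. A sketch of talking to it, assuming the controller, a model worker, and the API server have already been launched on port 8000, and using the pre-1.0 `openai` client interface (the model name and port are whatever you launched with):

```python
import openai

# Point the pre-1.0 openai client at the local FastChat server.
openai.api_key = "EMPTY"                      # local server ignores the key
openai.api_base = "http://localhost:8000/v1"

resp = openai.ChatCompletion.create(
    model="vicuna-7b-v1.5",                   # whichever worker was launched
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```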

guidance

A guidance language for controlling large language models.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
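
The core idea of a guidance-style program is interleaving literal text with named generation slots, so output structure is enforced by construction. A toy plain-Python illustration of that idea (this is not the guidance library's API; `fake_llm` is a stand-in):

```python
import re

def run_template(template, llm):
    """Fill each {{gen 'slot'}} with model output, leaving the
    literal text untouched (toy stand-in, not the real library)."""
    return re.sub(r"\{\{gen '(\w+)'\}\}", lambda m: llm(m.group(1)), template)

fake_llm = lambda slot: {"name": "Alice", "age": "30"}[slot]
print(run_template("Name: {{gen 'name'}}, Age: {{gen 'age'}}", fake_llm))
# -> Name: Alice, Age: 30
```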

langchain

⚡ Building applications with LLMs through composability ⚡

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
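
The composability idea in one small sketch, using the 0.0.x-era imports this fork dates from (newer LangChain releases moved and renamed these classes; requires `OPENAI_API_KEY` to actually run):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Compose prompt template -> model into one reusable chain.
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Give a one-sentence summary of {topic}.",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(topic="graph neural networks"))
```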

lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

llama

Inference code for LLaMA models

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 0 · Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
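
At the heart of what nanoGPT trains is causal self-attention. A single-head NumPy sketch of that op (nanoGPT's actual module is multi-head PyTorch, using flash attention when available):

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (T, d) sequence."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    att = (q @ k.T) / np.sqrt(d)
    att[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # mask future
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att /= att.sum(axis=-1, keepdims=True)
    return att @ v                       # (T, d) weighted values

T, d = 4, 8
x = np.random.randn(T, d)
out = causal_self_attention(x, *(np.random.randn(d, d) for _ in range(3)))
print(out.shape)                         # (4, 8)
```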

NBFNet

Official implementation of Neural Bellman-Ford Networks (NeurIPS 2021)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
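
NBFNet's starting point is the classic Bellman-Ford recursion, which it generalizes by replacing the (min, +) operations with learned aggregation and message functions over edge representations. The scalar classic, written with those two operators made pluggable:

```python
def bellman_ford(edges, n, src, message=lambda d, w: d + w, aggregate=min):
    """Classic Bellman-Ford with the message/aggregate operators made
    explicit; NBFNet learns neural versions of these two functions."""
    dist = [float("inf")] * n
    dist[src] = 0.0
    for _ in range(n - 1):                           # n-1 relaxation rounds
        for u, v, w in edges:
            dist[v] = aggregate((dist[v], message(dist[u], w)))
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
print(bellman_ford(edges, 4, 0))                     # [0.0, 3.0, 1.0, 4.0]
```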

PandemicLLM

Code and data for "Adapting Large Language Models to Forecast Pandemics in Real-time: A COVID-19 Case Study".

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

PEER_Benchmark

PEER benchmark, which appeared at the NeurIPS 2022 Datasets and Benchmarks Track (https://arxiv.org/abs/2206.02096)

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

picoGPT

An unnecessarily tiny implementation of GPT-2 in NumPy.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
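
In the spirit of picoGPT, GPT-2's building blocks each fit in a few lines of NumPy. A sketch of three of them (shapes follow common convention; this is not copied from the repo):

```python
import numpy as np

def gelu(x):
    # GPT-2's tanh-approximate GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))   # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, g, b, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return g * (x - mu) / np.sqrt(var + eps) + b

x = np.random.randn(2, 4)
print(layer_norm(x, np.ones(4), np.zeros(4)).round(3))
```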

QLoRA

QLoRA: Efficient Finetuning of Quantized LLMs

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
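
QLoRA trains only a low-rank LoRA update on top of a frozen, 4-bit-quantized base weight. A conceptual NumPy sketch of the LoRA part (dimensions and the `alpha/r` scaling follow the LoRA paper; the 4-bit NF4 quantization itself, handled by bitsandbytes, is omitted):

```python
import numpy as np

d_out, d_in, r, alpha = 16, 16, 4, 8
W = np.random.randn(d_out, d_in)      # frozen base weight (4-bit in QLoRA)
A = np.random.randn(r, d_in) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, init to zero

def forward(x):
    # Base path plus scaled low-rank update: x W^T + (alpha/r) x A^T B^T
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = np.random.randn(2, d_in)
print(forward(x).shape)               # (2, 16)
```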

single-cell-best-practices

This project is a work in progress! https://www.sc-best-practices.org

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

TLM

ICML 2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

torchdrug-dev

Private annotated version of torchdrug

License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0