Rui Wang (RuiWang1998)



Company: @HeliXonProtein

Home Page: ruiwang1998.com

Twitter: @ruiwang2017


Rui Wang's starred repositories

numpy

The fundamental package for scientific computing with Python.

Language: Python · License: NOASSERTION · Stars: 26,343 · Issues: 0

CASP15

CASP15 performance benchmarking of the state-of-the-art protein structure prediction methods

License: GPL-3.0 · Stars: 5 · Issues: 0

foldcomp

Compressing protein structures effectively with torsion angles

Language: C++ · License: GPL-3.0 · Stars: 142 · Issues: 0

cutlass

CUDA Templates for Linear Algebra Subroutines

Language: C++ · License: NOASSERTION · Stars: 4,528 · Issues: 0

tree

tree is a library for working with nested data structures

Language: Python · License: Apache-2.0 · Stars: 907 · Issues: 0
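The idea behind tree is mapping a function over arbitrarily nested containers while preserving their shape. A minimal pure-Python sketch of that pattern (`map_nested` is a hypothetical name, not the library's actual implementation):

```python
def map_nested(fn, structure):
    """Apply fn to every leaf of a nested dict/list/tuple, preserving shape."""
    if isinstance(structure, dict):
        return {k: map_nested(fn, v) for k, v in structure.items()}
    if isinstance(structure, (list, tuple)):
        return type(structure)(map_nested(fn, v) for v in structure)
    return fn(structure)  # a leaf value: apply the function

nested = {"a": [1, 2], "b": (3, {"c": 4})}
doubled = map_nested(lambda x: x * 2, nested)
# doubled == {"a": [2, 4], "b": (6, {"c": 8})}
```

The library's `tree.map_structure` exposes roughly this operation, with additional checks that multiple input structures share the same shape.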

s4

Structured state space sequence models

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 2,098 · Issues: 0

float8_experimental

This repository contains the experimental PyTorch native float8 training UX

Language: Python · License: BSD-3-Clause · Stars: 160 · Issues: 0

segment-anything-fast

A batched, offline-inference-oriented version of segment-anything

Language: Python · License: Apache-2.0 · Stars: 1,114 · Issues: 0

opt_einsum

⚡️Optimizing einsum functions in NumPy, Tensorflow, Dask, and more with contraction order optimization.

Language: Python · License: MIT · Stars: 803 · Issues: 0
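Why contraction order matters: the cost of a chained tensor contraction depends heavily on the order in which pairwise products are taken, even though every order yields the same result. A pure-Python sketch with hypothetical matrix shapes (counting multiplies only, not opt_einsum's actual cost model):

```python
def matmul_cost(m, k, n):
    """Approximate multiply count for an (m x k) @ (k x n) product."""
    return m * k * n

# Hypothetical shapes: A is 2 x 1000, B and C are 1000 x 1000.
mA, kAB, kBC, nC = 2, 1000, 1000, 1000

# (A @ B) @ C: every intermediate has only 2 rows.
left_first = matmul_cost(mA, kAB, kBC) + matmul_cost(mA, kBC, nC)

# A @ (B @ C): forms a full 1000 x 1000 intermediate first.
right_first = matmul_cost(kAB, kBC, nC) + matmul_cost(mA, kAB, nC)

print(left_first)   # 4,000,000 multiplies
print(right_first)  # 1,002,000,000 multiplies, ~250x more work
```

opt_einsum automates this search over contraction paths for general einsum expressions, where the gap between a good and a bad order can be many orders of magnitude.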

mistral-src

Reference implementation of Mistral AI 7B v0.1 model.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 8,657 · Issues: 0

ProteinPretraining

Source code of PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications.

Language: Python · License: MIT · Stars: 27 · Issues: 0

MS-AMP

Microsoft Automatic Mixed Precision Library

Language: Python · License: MIT · Stars: 452 · Issues: 0

PEER_Benchmark

PEER Benchmark, appearing in the NeurIPS 2022 Datasets and Benchmarks Track (https://arxiv.org/abs/2206.02096)

Language: Python · License: Apache-2.0 · Stars: 74 · Issues: 0

EATLM

Code for 'On Pre-trained Language Models For Antibody'

Language: Python · License: NOASSERTION · Stars: 27 · Issues: 0

Sophia

The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”

Language: Python · License: MIT · Stars: 885 · Issues: 0

ColabDesign

Making Protein Design accessible to all via Google Colab!

Language: Python · Stars: 492 · Issues: 0

Sophia

Effortless plug-and-play optimizer to cut model training costs by 50%: a new optimizer that is 2x faster than Adam on LLMs.

Language: Python · License: Apache-2.0 · Stars: 361 · Issues: 0

MiniFold

MiniFold: Deep Learning for Protein Structure Prediction, inspired by DeepMind's AlphaFold algorithm

Language: Jupyter Notebook · License: MIT · Stars: 198 · Issues: 0

OmniQuant

[ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

Language: Python · License: MIT · Stars: 555 · Issues: 0

Baichuan-7B

A large-scale 7B pretraining language model developed by BaiChuan-Inc.

Language: Python · License: Apache-2.0 · Stars: 5,635 · Issues: 0

dadaptation

D-Adaptation for SGD, Adam and AdaGrad

Language: Python · License: MIT · Stars: 482 · Issues: 0

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Language: Python · License: MIT · Stars: 9,010 · Issues: 0
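LoRA's core trick: rather than updating a full weight matrix W during fine-tuning, it freezes W and learns a low-rank pair B (d x r) and A (r x k), so the adapted weight is W + B @ A. A tiny pure-Python sketch of that arithmetic with toy rank-1 values (illustration only, not loralib's API; the alpha/r scaling is taken as 1 here):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

# Frozen pretrained weight W (3 x 3); only B and A would be trained.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
B = [[1.0], [2.0], [3.0]]   # 3 x 1, trainable down-projection
A = [[0.5, 0.5, 0.5]]       # 1 x 3, trainable up-projection

delta = matmul(B, A)        # rank-1 update, full 3 x 3 shape
W_adapted = [[w + d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
# W_adapted == [[1.5, 0.5, 0.5], [1.0, 2.0, 1.0], [1.5, 1.5, 2.5]]
```

With r much smaller than d and k, the trainable parameter count drops from d*k to r*(d + k), which is the source of LoRA's memory savings.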

unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Language: Python · License: MIT · Stars: 18,314 · Issues: 0

ControlNet

Let us control diffusion models!

Language: Python · License: Apache-2.0 · Stars: 27,821 · Issues: 0

llama

Inference code for Llama models

Language: Python · License: NOASSERTION · Stars: 52,894 · Issues: 0

Ankh

Ankh: Optimized Protein Language Model

Language: Python · License: NOASSERTION · Stars: 190 · Issues: 0

FlexGen

Running large language models on a single GPU for throughput-oriented scenarios.

Language: Python · License: Apache-2.0 · Stars: 9,000 · Issues: 0