Ming Qin's repositories
awesome-active-learning
We hope you can find everything you need about active learning in this repository.
chatgpt_academic
A ChatGPT extension tailored for research work, with special optimizations for polishing academic papers. Supports custom quick-access buttons, Markdown table rendering, dual display of TeX formulas, and polished code display; newly added local Python project analysis / self-analysis features.
clean-code-python
:bathtub: Clean Code concepts adapted for Python
CTR
Implementation of "Clustered Tree Regression to Learn Protein Energy Change with Mutated Amino Acid"
decision-transformer
Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.
disentanglement_lib
disentanglement_lib is an open-source library for research on learning disentangled representations.
DLKcat
Deep learning and Bayesian approach applied to enzyme turnover number for the improvement of enzyme-constrained genome-scale metabolic models (ecGEMs) reconstruction
esm
Evolutionary Scale Modeling (esm): Pretrained language models for proteins
FLIP
A collection of tasks to probe the effectiveness of protein sequence representations in modeling aspects of protein design
google-research
Google Research
idec-wiki
Repository for the iDEC wiki, mainly for resources on directed evolution
Impractical_Python_Projects
Code & supporting files for chapters in the book
latex_paper_writing_tips
Tips for Writing a Research Paper using LaTeX
Machine-learning-for-proteins
Listing of papers about machine learning for proteins.
MCMG
MCMG_V1
MLDE
A machine-learning package for navigating combinatorial protein fitness landscapes.
modAL
A modular active learning framework for Python
my-team-learning
My Datawhale team-learning notes; read online at: https://relph1119.github.io/my-team-learning
RDE-PPI
:mountain: Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction (ICLR 2023)
revisit-bert-finetuning
For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987).
stable-dreamfusion
A PyTorch implementation of text-to-3D DreamFusion, powered by Stable Diffusion.
ToMe
A method to increase the speed and lower the memory footprint of existing vision transformers.