Xiang Li's repositories
PytorchInsight
A PyTorch lib with state-of-the-art architectures, pretrained models, and real-time updated results
mae_segmentation
Reproduction of semantic segmentation using Masked Autoencoder (MAE)
RecursiveMix-pytorch
Official code and pretrained models for RecursiveMix
CarDetectionExample
Sinovation Ventures Challenge
algorithm-base
An algorithm base built for students just starting to practice coding problems; no explanation is too detailed, and it aims to use animations to make obscure, hard-to-grasp algorithms easy to understand!
deep-finance
Deep Learning for Finance
implus.github.io
implus' blog
PostDocRep
Document class for the submission of the Final Postdoctoral Report for the Institute of Theoretical Physics, Chinese Academy of Sciences
loss-landscape
Code for visualizing the loss landscape of neural nets
mae
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
MAE-pytorch
Unofficial PyTorch implementation of Masked Autoencoders Are Scalable Vision Learners
mmclassification
OpenMMLab Image Classification Toolbox and Benchmark
mmdetection
Open MMLab Detection Toolbox and Benchmark
mmsegmentation
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
PlotNeuralNet
Latex code for making neural networks diagrams
PyTorch-Encoding
A PyTorch CV Toolkit
pytorch_image_classification
PyTorch implementation of image classification models for CIFAR-10/CIFAR-100/MNIST/FashionMNIST/Kuzushiji-MNIST/ImageNet
robin_stocks
A library for use with the Robinhood Financial App. It currently supports trading cryptocurrencies, options, and stocks. In addition, it can retrieve real-time ticker information, assess the performance of your portfolio, and fetch tax documents, total dividends paid, and more.
SegmenTron
Supports PointRend, Fast_SCNN, HRNet, Deeplabv3_plus (Xception, ResNet, MobileNet), ContextNet, FPENet, DABNet, EdaNet, ENet, Espnetv2, RefineNet, UNet, DANet, DFANet, HardNet, LedNet, OCNet, EncNet, DuNet, CGNet, CCNet, BiSeNet, PSPNet, ICNet, FCN, and DeepLab
SimMIM
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities