Xiao Hongri's repositories
grid_control
Competition solution for the "State Grid Dispatch and Control AI Innovation Competition: Intelligent Scheduling of Power Grid Operations"
-
Implementation code for the paper "Graph Neural Network-Based Anomaly Detection in Multivariate Time Series" (AAAI 2021)
-PALM-5-4-
PaddlePaddle Regular Competition: PALM pathological myopia lesion detection and segmentation — baseline of the 4th-place solution (May)
-PALM_project
PaddlePaddle Regular Competition: PALM fovea localization in color fundus photographs — 4th-place solution (May)
AL-based-FL-for-Multi-Task-Disaster-Detection-Model
We design a multi-task model for joint disaster classification and victim detection. We train the model using both Centralized Learning (CL) and Federated Learning (FL). We also tried Active Learning (AL) to see how it could reduce the labeling workload for the disaster dataset. Lastly, we optimized the model using OpenVINO.
CNN-for-Paderborn-Bearing-Dataset
CNN for the Paderborn bearing dataset, implemented in Python
easy-rl
Chinese reinforcement learning tutorial (the "Mushroom Book"); read online at https://datawhalechina.github.io/easy-rl/
EMO
Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
ETSformer
PyTorch code for ETSformer: Exponential Smoothing Transformers for Time-series Forecasting 20240317
FedEM
Official code for "Federated Multi-Task Learning under a Mixture of Distributions" (NeurIPS'21)
federated-mtl
Code used in the experiments of the Master's thesis "Federated Multi-task Learning over Networked Data"
FEDformer
20240317
HarmoFL
[AAAI'22] HarmoFL: Harmonizing Local and Global Drifts in Federated Learning on Heterogeneous Medical Images
ImageBind
ImageBind One Embedding Space to Bind Them All
MTFL-For-Personalised-DNNs
Code for 'Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing', published in IEEE TPDS.
paddle_project_PALM1-
PaddlePaddle Regular Competition: PALM optic disc detection and segmentation in color fundus photographs — 4th place (May)
paderborn_bearing
Package for preprocessing Paderborn Bearing dataset
PARL
A high-performance distributed training framework for Reinforcement Learning
pathformer
20240317
RL-Adventure-2
PyTorch 0.4 implementations of: actor-critic / proximal policy optimization (PPO) / ACER / DDPG / twin dueling DDPG / soft actor-critic (SAC) / generative adversarial imitation learning (GAIL) / hindsight experience replay (HER)
RWKV-LM-GPT
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
scaleformer
20240317
Time-Series-Library
A Library for Advanced Deep Time Series Models.
TrafficMonitor
A desktop floating-window utility that displays current network speed, CPU, and memory usage; it also supports taskbar display and custom skins.
TranAD
[VLDB'22] Anomaly Detection using Transformers, self-conditioning and adversarial training.