piggy's starred repositories
VisualGLM-6B
A multimodal Chinese-English bilingual conversational language model
QASystemOnMedicalKG
A tutorial and implementation of a disease-centered medical knowledge graph and a QA system built on it: knowledge graph construction and automatic question answering. Builds a moderately sized, disease-centered knowledge graph for the medical domain and uses it to provide question-answering and analysis services.
Medical_NLP
Medical NLP competitions, datasets, large models, and papers
MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains a medical large language model, implementing continued pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
awesome-radiology-report-generation
A curated list of radiology report generation (medical report generation) and related areas. :-)
a-PyTorch-Tutorial-to-Image-Captioning
Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
Faster-R-CNN-with-model-pretrained-on-Visual-Genome
Faster R-CNN model in PyTorch, pretrained on Visual Genome with a ResNet-101 backbone
object_relation_transformer
Implementation of the Object Relation Transformer for Image Captioning
show-edit-tell
Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020
Image-Caption
Using an LSTM or Transformer for image captioning in PyTorch
Stack-Captioning
Stack-Captioning: Coarse-to-Fine Learning for Image Captioning
image-captioning
Implementation of 'X-Linear Attention Networks for Image Captioning' [CVPR 2020]
fairseq-image-captioning
Transformer-based image captioning extension for pytorch/fairseq
meshed-memory-transformer
Meshed-Memory Transformer for Image Captioning. CVPR 2020
transformer_image_caption
Image captioning based on the Bottom-Up and Top-Down Attention model
AttentioNN
All about attention in neural networks: soft attention, attention maps, local and global attention, and multi-head attention.
Residual-Attention-for-Video-Caption
Developed as my graduation project
Chinese-Chatbot-PyTorch-Implementation
:four_leaf_clover: Another Chinese chatbot implemented in PyTorch; a sub-module of an intelligent work-order processing robot. 👩‍🔧
Abstractive-Summarization
Implementation of abstractive summarization using LSTM in the encoder-decoder architecture with local attention.