Mingyou Sung's repositories
Transformer-Various-Positional-Encoding
This project aims to implement the Transformer Encoder blocks using various Positional Encoding methods.
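One encoding commonly compared in such experiments is the sinusoidal positional encoding from the original Transformer paper. A minimal NumPy sketch of the formula (the function name is illustrative, not this repo's API):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding:
    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    """
    pos = np.arange(max_len)[:, None]               # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]           # (1, d_model/2)
    angles = pos / np.power(10000.0, i / d_model)   # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                    # even dims: sine
    pe[:, 1::2] = np.cos(angles)                    # odd dims: cosine
    return pe

pe = sinusoidal_positional_encoding(max_len=50, d_model=16)
```

The resulting matrix is simply added to the token embeddings before the first encoder block.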
Multiclass-Focal-loss-pytorch
This is an implementation of multi-class focal loss in PyTorch.
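Focal loss down-weights well-classified examples so training focuses on hard ones: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t). A minimal NumPy sketch of the formula (the repo's actual PyTorch API may differ):

```python
import numpy as np

def multiclass_focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    With gamma=0 and alpha=None it reduces to plain cross-entropy."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_t = probs[np.arange(len(targets)), targets]           # prob of true class
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t)
    if alpha is not None:                                   # per-class weights
        loss = loss * np.asarray(alpha)[targets]
    return loss.mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
targets = np.array([0, 1])
ce = multiclass_focal_loss(logits, targets, gamma=0.0)  # cross-entropy baseline
fl = multiclass_focal_loss(logits, targets, gamma=2.0)
```

Since (1 - p_t)**gamma < 1 for any correctly-leaning prediction, the focal loss is always at most the cross-entropy.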
Attention-Various-Positional-Encoding
This project aims to implement the Scaled-Dot-Product Attention layer and the Multi-Head Attention layer using various Positional Encoding methods.
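The core operation is Attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V; multi-head attention runs several of these in parallel over projected subspaces. A minimal single-head NumPy sketch (shapes and names are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # similarity scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)        # block masked positions
    scores = scores - scores.max(axis=-1, keepdims=True)  # stable softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over key positions, so the rows sum to one.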
Counterfactual-statement-classification-and-span-dectection-using-Multi-task-Stacked-Bi-LSTMs
This project implements multi-task stacked Bi-LSTMs for classifying counterfactual statements and detecting their spans, using ELMo word embeddings and POS tags.
Depedency-Graph-Attention-Networks
This project aims to develop dependency graph attention networks that represent the dependency relations of each word in a given text using masked self-attention. The network outputs a token-level representation formed by summing each token with its dependencies.
Fact-Checking-on-news-dataset-using-BERT
This project aims to implement text classification architecture using pre-trained language model BERT.
Recurrent-Networks_LSTM-and-GRU
This project aims to implement LSTM and GRU.
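As a reference for the gating equations involved, here is one GRU step in NumPy (biases omitted for brevity; the repo's implementation may be framework-based):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step:
    z = sigmoid(Wz x + Uz h)        # update gate
    r = sigmoid(Wr x + Ur h)        # reset gate
    h_tilde = tanh(Wh x + Uh (r*h)) # candidate state
    h' = (1 - z) * h + z * h_tilde
    """
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# W* matrices map inputs, U* matrices map the hidden state
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0
          else rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for t in range(5):                      # unroll over a short sequence
    h = gru_cell(rng.standard_normal(d_in), h, params)
```

Because the new state is a convex combination of the old state and a tanh candidate, the hidden values stay bounded in [-1, 1] when initialized at zero.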
Span-Detection-Using-LSTMs
This project aims to implement the LSTM and Bidirectional LSTM for span detection in a given text.
awesome-adversarial-machine-learning
A curated list of awesome adversarial machine learning resources
awesome-deep-learning-papers
The most cited deep learning papers
Covid-19-Tweets-Sentiment-Analysis-using-BERT
This project aims to implement text classification architecture using pre-trained language model BERT.
DCGAN-tensorflow
A TensorFlow implementation of "Deep Convolutional Generative Adversarial Networks"
deep-learning
Repo for the Deep Learning Nanodegree Foundations program.
Depedency-Graph-Convolutional-Networks
This project aims to develop dependency graph convolutional networks in order to represent the dependency relations of each word in a given text.
Matching-pre-trained-offset-and-preprocessed-offset
This project matches the token offsets of pre-trained embeddings such as BERT, ELMo, and GloVe to the token offsets and dependencies produced by preprocessing tools such as Stanza and spaCy. The result can therefore be fed into graph neural networks without offset mismatches between the preprocessed tokens and the pre-trained model's tokens.
nbviewer
Nbconvert as a webservice (rendering ipynb to static HTML)
NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
Sentiment-Polarity-Analysis
This project aims to compute the sentence-level and word-level sentiment polarity of a given text.
sequence_tagging
Named Entity Recognition (LSTM + CRF) - TensorFlow
soynlp
A Python library for Korean natural language processing. It provides word extraction, tokenization, part-of-speech tagging, and preprocessing.
Span-Detection-Using-GNNs
This project aims to implement the dependency-graph-convolutional-networks and dependency-graph-attention-networks for span detection in a given text.
tacotron2
Tacotron 2 - PyTorch implementation with faster-than-realtime inference
Tokenization-Techniques
This project aims to implement word-based, character-based and subword-based tokenization techniques.
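The three families differ in granularity: word-based splits on whitespace, character-based splits into individual characters, and subword-based splits rare words into known pieces. A minimal sketch in plain Python (the subword function is a WordPiece-style greedy longest-match illustration, not this repo's exact code):

```python
def word_tokenize(text):
    """Word-based: split on whitespace."""
    return text.split()

def char_tokenize(text):
    """Character-based: one token per character."""
    return list(text)

def subword_tokenize(word, vocab):
    """Subword-based: greedy longest-match-first split (WordPiece-style).
    Continuation pieces are marked with a '##' prefix."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            cand = piece if start == 0 else "##" + piece
            if cand in vocab:
                tokens.append(cand)
                break
            end -= 1
        if end == start:          # no vocabulary piece matched
            return ["[UNK]"]
        start = end
    return tokens

vocab = {"token", "##ization", "##s"}
subword_tokenize("tokenization", vocab)   # ['token', '##ization']
```

Subword schemes like BPE or WordPiece keep the vocabulary small while avoiding most `[UNK]` tokens.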
Using-output-from-the-specific-intermediate-layer-of-BERT
This project customizes BERT so that target tasks can use the output of a specific intermediate layer of the pre-trained model. The number of hidden layers is a user-specified parameter; if it is larger than 12, layers 1 through 12 are initialized from pre-trained BERT while layers from the 13th onward are randomly initialized.
Word-Embedding-Methods
This project aims to implement various word embedding methods.