KJ_Kwanjai's starred repositories
t81_558_deep_learning
T81-558: Keras - Applications of Deep Neural Networks @Washington University in St. Louis
python_for_microscopists
https://www.youtube.com/channel/UC34rW-HtPJulxr5wp2Xa04w?sub_confirmation=1
machine-learning-articles
🧠💬 Articles I wrote about machine learning, archived from MachineCurve.com.
flow-forecast
Deep learning PyTorch library for time series forecasting, classification, and anomaly detection (originally for flood forecasting).
transformer-time-series-prediction
Proof of concept for a transformer-based time series prediction model
transformer
Implementation of the Transformer model (originally from "Attention Is All You Need") applied to time series.
attention-is-all-you-need-keras
A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need
multigraph_transformer
Official code of the paper "Multi-Graph Transformer for Free-Hand Sketch Recognition" (IEEE TNNLS 2021). Keywords: transformer, multi-graph transformer, graph classification, sketch recognition, free-hand sketch.
ConvTransformerTimeSeries
Convolutional Transformer for time series
transfer_learning_music
Transfer learning for music classification and regression tasks
ML-assignments
About Regression, Classification, CNN, RNN, Explainable AI, Adversarial Attack, Network Compression, Seq2Seq, GAN, Transfer Learning, Meta Learning, Life-long Learning, and Reinforcement Learning.
Seq2SeqSharp
Seq2SeqSharp is a tensor-based, fast, and flexible deep neural network framework written in .NET (C#). Highlighted features include automatic differentiation, multiple network types (Transformer, LSTM, BiLSTM, and so on), multi-GPU support, cross-platform operation (Windows, Linux, x86, x64, ARM), and a multimodal model for text and images.
Coursera_Deep_Learning_Specialization
Implementation of Logistic Regression, MLP, CNN, RNN & LSTM from scratch in python. Training of deep learning models for image classification, object detection, and sequence processing (including transformers implementation) in TensorFlow.
basic-dataset
A collection of datasets from various sources
Deep-Learning-Algorithms
CNN, LSTM, RNN, GRU, DNN, BERT, Transformer, ULMFiT
Speech-Emotion-Analysis
Human emotions are one of the strongest forms of communication: even someone who doesn't understand a language can recognize the emotions a speaker conveys. In other words, emotions are universal. The idea behind the project is to develop a Speech Emotion Analyzer that uses deep learning to classify a speaker's emotions, such as neutral, angry, or surprised speech. Three network architectures are deployed for the classification task: a 1-D CNN, LSTMs, and Transformers. Two feature-extraction methodologies (MFCCs and Mel spectrograms) are used to capture the features of a given voice signal, and the two are compared on their ability to produce high-quality results, especially in deep-learning models.
transformer_soc
Transformer neural network for state-of-charge estimation in TensorFlow
non-coding-DNA-classifier
Deep learning multi-label classifier of non-coding DNA sequences
Transfer-learning
Transfer Knowledge Learned from Multiple Domains for Time-series Data Prediction
learning-wavelets
Learning wavelet transforms for audio compression
accelerometer_data_filtering
Using a median filter and a low-pass filter from the SciPy library
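A minimal sketch of that filtering pipeline, assuming a 1-D accelerometer trace and SciPy's `medfilt`, `butter`, and `filtfilt`; the kernel size, cutoff frequency, and sample rate below are illustrative choices, not values taken from the repository.

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

def smooth_accelerometer(signal, fs, kernel_size=5, cutoff_hz=5.0, order=4):
    """Denoise a 1-D accelerometer trace: a median filter removes spikes,
    then a zero-phase Butterworth low-pass attenuates high-frequency noise."""
    despiked = medfilt(signal, kernel_size=kernel_size)       # spike removal
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")  # normalized cutoff
    return filtfilt(b, a, despiked)                            # zero-phase filtering

# Synthetic example: 2 Hz sine sampled at 100 Hz with noise and spikes
np.random.seed(0)
fs = 100
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
noisy[::50] += 3.0  # occasional large spikes
filtered = smooth_accelerometer(noisy, fs)
```

`filtfilt` runs the filter forward and backward so the smoothed trace stays time-aligned with the raw one, which matters when comparing against other sensor channels.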
PhysicsAnalysis
Converts a CSV file containing linear acceleration data into a set of line graphs
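A sketch of that CSV-to-line-graphs conversion with pandas and Matplotlib; the column names (`time`, `ax`, `ay`, `az`) and the inline sample data are hypothetical, since the repository's actual CSV schema is not given.

```python
import io
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical linear-acceleration CSV (stand-in for a real file on disk)
csv_text = """time,ax,ay,az
0.00,0.02,-0.01,9.81
0.01,0.05,0.00,9.79
0.02,0.03,0.02,9.83
"""

df = pd.read_csv(io.StringIO(csv_text))

# One line graph per acceleration axis, sharing the time axis
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 6))
for axis, col in zip(axes, ["ax", "ay", "az"]):
    axis.plot(df["time"], df[col])
    axis.set_ylabel(f"{col} (m/s^2)")
axes[-1].set_xlabel("time (s)")
fig.savefig("acceleration.png")
```

For a real file, replace `io.StringIO(csv_text)` with the CSV path in `pd.read_csv`.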
RNN-and-Transformers
Code from a sequence modeling course