KJ_Kwanjai (KwanjaiTassanee)



Location: Phuket


KJ_Kwanjai's starred repositories

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Language: Python · License: NOASSERTION · Stargazers: 80246 · Issues: 1731 · Issues: 43064

darts

A Python library for user-friendly forecasting and anomaly detection on time series.

Language: Python · License: Apache-2.0 · Stargazers: 7569 · Issues: 60 · Issues: 1466

t81_558_deep_learning

T81-558: Keras - Applications of Deep Neural Networks @Washington University in St. Louis

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 5688 · Issues: 323 · Issues: 123

python_for_microscopists

https://www.youtube.com/channel/UC34rW-HtPJulxr5wp2Xa04w?sub_confirmation=1

Language: Jupyter Notebook · License: MIT · Stargazers: 3755 · Issues: 108 · Issues: 72

machine-learning-articles

🧠💬 Articles I wrote about machine learning, archived from MachineCurve.com.

flow-forecast

Deep learning PyTorch library for time series forecasting, classification, and anomaly detection (originally for flood forecasting).

Language: Python · License: GPL-3.0 · Stargazers: 1957 · Issues: 29 · Issues: 198

transformer-time-series-prediction

Proof of concept for a transformer-based time series prediction model.

Language: Python · License: MIT · Stargazers: 1212 · Issues: 12 · Issues: 24

transformer

Implementation of the Transformer model (originally from "Attention Is All You Need") applied to time series.

Language: Jupyter Notebook · License: GPL-3.0 · Stargazers: 830 · Issues: 15 · Issues: 58

attention-is-all-you-need-keras

A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need

multigraph_transformer

Official code of the IEEE TNNLS 2021 paper "Multi-Graph Transformer for Free-Hand Sketch Recognition" (keywords: transformer, multi-graph transformer, graph classification, sketch recognition, sketch classification, free-hand sketch).

Language: Python · License: MIT · Stargazers: 291 · Issues: 7 · Issues: 5

ConvTransformerTimeSeries

Convolutional Transformer for time series

transfer_learning_music

Transfer learning for music classification and regression tasks

Language: Jupyter Notebook · Stargazers: 255 · Issues: 13 · Issues: 10

ML-assignments

About Regression, Classification, CNN, RNN, Explainable AI, Adversarial Attack, Network Compression, Seq2Seq, GAN, Transfer Learning, Meta Learning, Life-long Learning, and Reinforcement Learning.

Language: Jupyter Notebook · Stargazers: 215 · Issues: 6 · Issues: 1

Seq2SeqSharp

Seq2SeqSharp is a tensor-based, fast, and flexible deep neural network framework written in .NET (C#). Its highlights include automatic differentiation, multiple network types (Transformer, LSTM, BiLSTM, and so on), multi-GPU support, cross-platform operation (Windows, Linux, x86, x64, ARM), and a multimodal model for text and images.

Language: C# · License: NOASSERTION · Stargazers: 193 · Issues: 23 · Issues: 58

Coursera_Deep_Learning_Specialization

Implementations of Logistic Regression, MLP, CNN, RNN, and LSTM from scratch in Python. Training of deep learning models for image classification, object detection, and sequence processing (including a transformer implementation) in TensorFlow.

Language: Jupyter Notebook · Stargazers: 90 · Issues: 2 · Issues: 5
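As a rough sketch of the "from scratch" part, a minimal binary logistic regression trained with batch gradient descent might look like the following (NumPy only; the function names and hyperparameters here are illustrative, not taken from the repository):

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping any real score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=500):
    # Batch gradient descent on the binary cross-entropy loss.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Gradient of the mean cross-entropy w.r.t. weights and bias.
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b
```

Predictions then come from thresholding `sigmoid(X @ w + b)` at 0.5.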

basic-dataset

A collection of datasets from various sources.

Language: HTML · Stargazers: 90 · Issues: 5 · Issues: 0

Deep-Learning-Algorithms

CNN, LSTM, RNN, GRU, DNN, BERT, Transformer, ULMFiT

Language: Jupyter Notebook · Stargazers: 29 · Issues: 0 · Issues: 0

mousecam

A head-mounted camera system that integrates detailed behavioral monitoring with multichannel electrophysiology in freely moving mice.

Language: Python · License: GPL-3.0 · Stargazers: 25 · Issues: 7 · Issues: 3

Speech-Emotion-Analysis

Human emotions are one of the strongest forms of communication. Even a person who does not understand a language can understand the emotions it conveys; in other words, emotions are universal. The idea behind the project is to develop a Speech Emotion Analyzer that uses deep learning to correctly classify different human emotions, such as neutral, angry, and surprised speech. Three network architectures are deployed to carry out the classification task: a 1-D CNN, LSTMs, and Transformers. Two feature extraction methodologies (MFCCs and mel spectrograms) are used to capture the features of a given voice signal, and the two are compared in their ability to produce high-quality results, especially in deep learning models.

Language: Jupyter Notebook · Stargazers: 21 · Issues: 2 · Issues: 0
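For a rough idea of the mel-spectrogram side of that feature pipeline, here is a minimal log-mel spectrogram in plain NumPy (frame size, hop length, and filter count are illustrative assumptions, not the repository's settings):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale from 0 Hz to Nyquist.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(signal, sr, n_fft=512, hop=128, n_mels=40):
    # Frame the signal, apply a Hann window, take the power spectrum,
    # project onto the mel filterbank, and compress with a log.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-10)
```

MFCCs would follow from one further step: a discrete cosine transform over the log-mel bands of each frame.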

transformer_soc

Transformer neural network for state-of-charge estimation in TensorFlow.

Language: Jupyter Notebook · Stargazers: 15 · Issues: 3 · Issues: 3

non-coding-DNA-classifier

Deep learning multi-label classifier of non-coding DNA sequences

Language: Jupyter Notebook · Stargazers: 13 · Issues: 0 · Issues: 0

Transfer-learning

Transfer Knowledge Learned from Multiple Domains for Time-series Data Prediction

Language: Jupyter Notebook · Stargazers: 10 · Issues: 0 · Issues: 0

learning-wavelets

Learning wavelet transforms for audio compression

Language: Jupyter Notebook · Stargazers: 8 · Issues: 4 · Issues: 0

accelerometer_data_filtering

Accelerometer data filtering using a median filter and a low-pass filter from the SciPy library.

Language: Python · Stargazers: 5 · Issues: 0 · Issues: 0
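A minimal sketch of that kind of two-stage smoothing with SciPy (the kernel size, cutoff frequency, and filter order below are illustrative guesses, not the repository's values):

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def denoise_accelerometer(signal, fs, kernel_size=5, cutoff_hz=5.0, order=4):
    """Median-filter to knock out isolated spikes, then apply a
    zero-phase Butterworth low-pass to smooth the remaining noise."""
    despiked = medfilt(signal, kernel_size=kernel_size)
    # Normalize the cutoff by the Nyquist frequency, as scipy expects.
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, despiked)
```

Using `filtfilt` (forward-backward filtering) avoids the phase lag a single forward pass would introduce, which matters when the filtered trace is compared against event timestamps.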

PhysicsAnalysis

Converts a CSV file containing linear acceleration data into a set of line graphs

Language: Python · License: MIT · Stargazers: 2 · Issues: 3 · Issues: 2
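A hedged sketch of that CSV-to-line-graph conversion (the column names `time`, `ax`, `ay`, `az` and the Matplotlib Agg backend are assumptions; the repository's actual CSV format may differ):

```python
import csv

import matplotlib
matplotlib.use("Agg")  # render to a file; no display needed
import matplotlib.pyplot as plt

def read_acceleration_csv(path):
    """Parse rows of time,ax,ay,az into per-column float lists."""
    cols = {"time": [], "ax": [], "ay": [], "az": []}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for key in cols:
                cols[key].append(float(row[key]))
    return cols

def plot_acceleration(cols, out_path):
    """Draw one line per axis against the shared time column."""
    fig, ax = plt.subplots()
    for key in ("ax", "ay", "az"):
        ax.plot(cols["time"], cols[key], label=key)
    ax.set_xlabel("time (s)")
    ax.set_ylabel("linear acceleration (m/s^2)")
    ax.legend()
    fig.savefig(out_path)
    plt.close(fig)
```

Splitting parsing from plotting keeps the CSV reader testable without a graphics backend.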

RNN-and-Transformers

Sequence modeling course codes

Language: Jupyter Notebook · License: MIT · Stargazers: 1 · Issues: 1 · Issues: 0