There are 32 repositories under the sequence-modeling topic.
Code Repository for Liquid Time-Constant Networks (LTCs)
Repository for the tutorial on Sequence-Aware Recommender Systems held at TheWebConf 2019 and ACM RecSys 2018
Efficient Python library for Extended LSTM with exponential gating, memory mixing, and matrix memory for superior sequence modeling.
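To make the exponential-gating idea concrete, here is a minimal sketch of one sLSTM-style cell step following the equations in the xLSTM paper; the function name and argument layout are illustrative, not this library's API:

```python
import torch

def slstm_step(z, i_tilde, f_tilde, o_tilde, c, n, m):
    """One sLSTM-style cell step with exponential gating (illustrative sketch).

    z: raw cell input; i_tilde/f_tilde/o_tilde: raw gate pre-activations;
    c, n, m: cell, normalizer, and stabilizer states from the previous step.
    """
    # Stabilizer state keeps exp() from overflowing by tracking the running max log-gate.
    m_new = torch.maximum(f_tilde + m, i_tilde)
    i = torch.exp(i_tilde - m_new)                # exponential input gate
    f = torch.exp(f_tilde + m - m_new)            # exponential forget gate
    c_new = f * c + i * torch.tanh(z)             # memory mixing of old state and new input
    n_new = f * n + i                             # normalizer accumulates total gate mass
    h = torch.sigmoid(o_tilde) * (c_new / n_new)  # normalized hidden state
    return h, c_new, n_new, m_new
```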
Implementation of the GateLoop Transformer in PyTorch and JAX
PyTorch implementation of Simplified Structured State-Spaces for Sequence Modeling (S5)
Various architectures and implementations of novel papers for Natural Language Processing tasks such as sequence modeling and neural machine translation.
Sequential model for polyphonic music
Reinforcement-learning-related papers from ICLR 2019
Repo to reproduce the First-Explore paper results
Python package for Arabic natural language processing
Temporal Neural Networks
A hands-on project for forecasting time-series with PyTorch LSTMs. It creates realistic daily data (trend, seasonality, events, noise), prepares it with sliding windows, and trains an LSTM to make multi-step predictions. The project tracks errors with RMSE, MAE, MAPE and shows clear plots of training progress and forecast results.
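The sliding-window step in that pipeline is worth seeing in code; a minimal sketch assuming a 1-D NumPy series (names like make_windows and Forecaster are illustrative, not the repo's):

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, lookback=30, horizon=7):
    """Slice a 1-D series into (input window, multi-step target) pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i : i + lookback])
        y.append(series[i + lookback : i + lookback + horizon])
    X = torch.tensor(np.array(X), dtype=torch.float32).unsqueeze(-1)  # (N, lookback, 1)
    return X, torch.tensor(np.array(y), dtype=torch.float32)          # (N, horizon)

class Forecaster(nn.Module):
    def __init__(self, hidden=64, horizon=7):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)  # direct multi-step output

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # forecast the whole horizon from the last state
```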
Elucidating the Design Choice of Probability Paths in Flow Matching for Forecasting
Official Repository of JustDense: Just using Dense instead of Sequence Mixer for Time Series analysis
This course studies the fundamentals of distributed machine learning algorithms and of deep learning, covering the basics of machine learning and introducing techniques and systems that enable machine learning algorithms to be efficiently parallelized.
Implementation of Echo State Networks (ESN) with experiments on MNIST and ECG5000. Includes comparison with Linear Regression and analysis of weight initialization methods for time-series and classification tasks.
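The ESN recipe is compact enough to sketch: a fixed random recurrent reservoir whose states are read out by a linear model, here fit with ridge regression (sizes and names are illustrative, not this repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random weights; only the linear readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius below 1

def run_reservoir(u):                    # u: (T, n_in)
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)  # leaky-free reservoir update
        states.append(x.copy())
    return np.array(states)              # (T, n_res)

def train_readout(states, targets, ridge=1e-6):
    # Ridge regression: solve (X^T X + lambda I) W_out^T = X^T Y
    A = states.T @ states + ridge * np.eye(n_res)
    return np.linalg.solve(A, states.T @ targets).T
```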
Minimal and efficient implementation of the Mamba state space model in JAX/Flax. Inspired by 'Mamba: Linear-Time Sequence Modeling with Selective State Spaces,' this repo provides fast, scalable, and well-documented, state-of-the-art sequence modeling tools.
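The "linear-time" part comes from evaluating the SSM recurrence h_t = a_t * h_{t-1} + b_t x_t as a scan over the sequence; a toy illustration with jax.lax.scan for a diagonal transition (a deliberate simplification of the selective SSM, not this repo's API):

```python
import jax
import jax.numpy as jnp

def ssm_scan(a, b_x):
    """Evaluate h_t = a_t * h_{t-1} + (B_t x_t) along the time axis.

    a, b_x: (T, d) per-step diagonal transition and input terms.
    """
    def step(h, inputs):
        a_t, bx_t = inputs
        h = a_t * h + bx_t
        return h, h                      # carry the new state, also emit it

    h0 = jnp.zeros(a.shape[1])
    _, hs = jax.lax.scan(step, h0, (a, b_x))
    return hs                            # (T, d) sequence of hidden states

hs = ssm_scan(jnp.full((16, 4), 0.9), jnp.ones((16, 4)))
```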
Practical implementation of Long Short-Term Memory (LSTM) networks for time series modeling and sequence prediction. Includes step-by-step notebooks, model training, evaluation, and real-world case studies.
An unofficial implementation of "TransAct: Transformer-based Realtime User Action Model for Recommendation at Pinterest" in TensorFlow
Audio and Music Synthesis with Machine Learning
Human Activity Recognition using Deep Learning on Spatio-Temporal Graphs
Deep, sequential, transductive divergence metric and domain adaptation for time-series classifiers
TensorFlow implementation of a Long Short-Term Memory model for audio synthesis, written for a thesis
Implements and benchmarks optimal demonstration selection strategies for In-Context Learning (ICL) using LLMs. Covers IDS, RDES, Influence-based Selection, Se², and TopK+ConE across reasoning and classification tasks, analyzing the impact of example relevance, diversity, and ordering on model performance across multiple architectures.
Modular spectral transformer implementations in PyTorch with Fourier, wavelet, and other frequency-domain operations for efficient sequence modeling
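For flavor, one of the simplest frequency-domain mixers is an FNet-style layer that swaps attention for a parameter-free FFT over tokens; a hedged sketch of that idea, not this repo's implementation:

```python
import torch
import torch.nn as nn

class FourierMixer(nn.Module):
    """FNet-style token mixing: 2-D FFT over sequence and feature dims, keep the real part."""
    def forward(self, x):                 # x: (batch, seq, dim)
        return torch.fft.fft2(x, dim=(-2, -1)).real

class SpectralBlock(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.mix = FourierMixer()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):
        x = self.norm1(x + self.mix(x))   # parameter-free mixing instead of attention
        return self.norm2(x + self.ff(x))
```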
Mastering LSTM Networks: a focused collection of high-impact projects for sequence modeling, time series forecasting, and pattern recognition using robust, well-documented, production-grade code.
A simple RNN-based language model for Armenian text. This project includes preprocessing Armenian text data, tokenization, vocabulary building, sequence generation, model training using PyTorch, and Armenian text generation.
Scrapes and preprocesses raw text data, then trains an LSTM language model to predict and generate text sequences. Includes tokenization, vocabulary mapping, training with PyTorch, and text generation from a seed phrase for coherent output.
This repository implements a GPU-accelerated next-word prediction model using PyTorch and LSTM. It includes data preprocessing with NLTK, vocabulary creation, training on tokenized text, and generating text predictions, starting from a given input phrase.
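The previous three entries share one backbone: embed tokens, run an LSTM (or RNN), and project back to the vocabulary; a minimal, hedged sketch of that next-token model (class and variable names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):            # tokens: (batch, seq) of word ids
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)                # (batch, seq, vocab) next-token logits

# Training pairs each position with the token that follows it:
#   logits = model(batch[:, :-1])
#   loss = F.cross_entropy(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
```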
An AI model that learns and composes music in the style of Chopin using LSTM-based neural networks.
PyTorch implementation of the Transformer from the paper "Attention Is All You Need"
Loto6 lottery number prediction with BiLSTM + Monte Carlo Dropout (educational use)
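Monte Carlo Dropout, as used in the last entry, keeps dropout active at inference and averages several stochastic forward passes so the spread estimates predictive uncertainty; a short sketch (the function name is illustrative):

```python
import torch

def mc_dropout_predict(model, x, n_samples=100):
    """Sample predictions with dropout left on; the mean is the estimate, the std the uncertainty."""
    model.train()                         # train() keeps dropout layers stochastic at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)
```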