There are 4 repositories under the positional-encoding topic.
Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch
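As an illustrative sketch (not code from the repository above), rotary embeddings can be applied by rotating pairs of channels through a position-dependent angle; this minimal NumPy version uses the split-halves pairing convention:

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply rotary position embeddings (RoPE) to a (seq_len, dim) array.
    Channel pairs (x1[i], x2[i]) are rotated by angle pos * freq[i]."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)        # per-pair frequencies
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

q = np.random.randn(8, 16)
q_rot = rotary_embed(q)
print(q_rot.shape)  # (8, 16)
```

Because each position applies a pure rotation, vector norms are preserved and position 0 is left unchanged; attention scores between rotated queries and keys then depend only on relative position.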
Implement Llama 3 inference step by step: grasp the core concepts, master the derivation, and write the code.
Cameras as Relative Positional Encoding
PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces (CVPR 2023)
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
[CVPR 2021] Adversarial Generation of Continuous Images
[CVPR 2023] This is the official PyTorch implementation for "Dynamic Focus-aware Positional Queries for Semantic Segmentation".
Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding
Trading Positional Complexity vs Deepness in Coordinate Networks
"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.
Developed the ViViT model for medical video classification, enhancing 3D organ image analysis using transformer-based architectures.
Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Long-Range and Hierarchical Structures
Context-aware Biases for Length Extrapolation
This repository offers a comprehensive overview and quantitative benchmarking of positional encoding methods in transformer-based time series models.
PyTorch implementation of "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
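For reference (a generic sketch, not taken from the repository above), the fixed sinusoidal positional encoding defined in "Attention Is All You Need" interleaves sines and cosines over geometrically spaced frequencies:

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model, base=10000.0):
    """Sinusoidal positional encoding from "Attention Is All You Need":
    PE[pos, 2i]   = sin(pos / base^(2i/d_model))
    PE[pos, 2i+1] = cos(pos / base^(2i/d_model))"""
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]             # even channel indices
    angles = pos / base ** (i / d_model)              # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_pe(50, 32)
print(pe.shape)  # (50, 32)
```

The encoding is added to the token embeddings before the first attention layer; its values stay in [-1, 1] and require no learned parameters.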
🧮 Algebraic Positional Encodings.
A complete implementation of the original Transformer.
A clean, ground-up implementation of the Transformer architecture in PyTorch, including positional encoding, multi-head attention, encoder-decoder layers, and masking. Great for learning or building upon the core model.
[ICML'25] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi Cai, Jason D. Lee, Pan Li, Zhangyang Wang
Unofficial pytorch implementation of the paper "Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding", NeurIPS 2021.
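A minimal NumPy sketch of the idea behind learnable Fourier features (illustrative only; in the paper the projection matrix `W` is a trained parameter and the features are further passed through an MLP):

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(pos, W):
    """Map M-dimensional positions to Fourier features
    r(x) = 1/sqrt(D) * [cos(x W^T), sin(x W^T)],
    where W has shape (D/2, M) and would be learned in practice."""
    proj = pos @ W.T                                  # (N, D/2)
    D = 2 * W.shape[0]
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1) / np.sqrt(D)

coords = rng.random((4, 2))      # e.g. 2-D spatial positions
W = rng.normal(size=(8, 2))      # D/2 = 8 frequency rows (random stand-in)
feats = fourier_features(coords, W)
print(feats.shape)  # (4, 16)
```

A convenient property of this parameterization: since cos² + sin² = 1 per frequency, every feature vector has squared norm exactly 1/2 regardless of the input position.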
Implementation of Rotary Embeddings, from the RoFormer paper, in TensorFlow
An implementation of the Transformer from scratch, as presented in the paper "Attention Is All You Need".
This repository is an AI paper reproduction: a from-scratch Transformer implementation. The code follows the module breakdown of the original paper, covering all components including positional encoding, multi-head attention, feed-forward networks, and the encoder-decoder, and ships with detailed Chinese walkthrough documents and English comments for learning and further development.
Benchmarking Positional Encodings for GNNs and Graph Transformers
Application for training an autoencoder whose encoder can be used as a feature extractor for dimensionality and noise reduction, and whose decoder can be used for synthetic data generation. Supports dynamic plugin integration, allowing users to extend its capabilities by adding custom encoder and decoder models.
Official code for NeurIPS 2023 paper "Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding".
Positional Encoding meets Persistent Homology on Graphs
Rust crate for `Embeddings` and `Positional Encoding` (Q2 2025)
Teaching transformer-based architectures
Codebase of paper "Balancing structure and position information in Graph Transformer network with a learnable node embedding"
Code for "The Locality and Symmetry of Positional Encodings" EMNLP Findings
This repository provides a complete workflow for text processing using Hugging Face Transformers and NLTK. It includes modules for sentence normalization, spelling correction, word embedding generation, positional encoding computation, and English-to-French translation.
PyTorch implementation of Rotary Spatial Embeddings