hankkkwu / SegFormer-pytorch

Implementation of SegFormer in PyTorch


Building blocks of the SegFormer architecture (minimal sketches of each follow the list):

  1. Overlap Patch Embedding - converts an image into a sequence of overlapping patches.
  2. Efficient Self-Attention - the first core component of all Transformer-based models.
  3. Mix-FeedForward (Mix-FFN) module - the second core component of Transformer models; together with self-attention it forms a single Transformer block.
  4. Transformer block - self-attention + Mix-FFN + layer norm make up one basic Transformer block.
  5. Decoder head - a lightweight head made of MLP layers.
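
A minimal sketch of the overlap patch embedding: a strided convolution whose kernel is larger than its stride, so neighbouring patches share pixels. The class name and the 7x7/stride-4 defaults (the paper's first-stage setting) are illustrative, not taken from this repo's notebooks.

```python
import torch.nn as nn

class OverlapPatchEmbedding(nn.Module):
    """Convert an image into a sequence of overlapping patch tokens."""

    def __init__(self, in_channels=3, embed_dim=64, patch_size=7, stride=4):
        super().__init__()
        # kernel_size > stride => adjacent patches overlap, preserving the
        # local continuity that ViT-style non-overlapping patchify loses.
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size,
                              stride=stride, padding=patch_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        x = self.proj(x)                    # (B, embed_dim, H/stride, W/stride)
        _, _, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)    # (B, H*W, embed_dim) token sequence
        return self.norm(x), H, W
```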
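Efficient self-attention could look roughly like this. The sequence-reduction idea (downsampling keys and values by `sr_ratio` before attention) is from the SegFormer paper; the exact module layout and names here are assumptions.

```python
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    """Multi-head self-attention with spatial reduction of keys/values."""

    def __init__(self, dim, num_heads=1, sr_ratio=8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # Strided conv shrinks the K/V sequence by sr_ratio**2, so the
            # attention matrix is N x (N / sr_ratio**2) instead of N x N.
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):
        B, N, C = x.shape                   # N == H * W
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        if self.sr_ratio > 1:
            x = x.transpose(1, 2).reshape(B, C, H, W)
            x = self.sr(x).reshape(B, C, -1).transpose(1, 2)
            x = self.norm(x)

        kv = self.kv(x).reshape(B, -1, 2, self.num_heads, self.head_dim)
        k, v = kv.permute(2, 0, 3, 1, 4)    # each (B, heads, N', head_dim)

        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```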
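A sketch of Mix-FFN and a full Transformer block, reusing the EfficientSelfAttention class from the sketch above. The depth-wise 3x3 convolution inside the FFN is what lets SegFormer drop explicit positional encodings; the pre-norm residual arrangement follows the paper, while the names and expansion factor are illustrative.

```python
import torch.nn as nn

class MixFFN(nn.Module):
    """Feed-forward module with a 3x3 depth-wise conv between the two
    linear layers, leaking positional information from the 2-D layout."""

    def __init__(self, dim, expansion=4):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x, H, W):
        B, N, C = x.shape
        x = self.fc1(x)
        # Run the depth-wise conv on the spatial grid, then flatten back.
        x = x.transpose(1, 2).reshape(B, -1, H, W)
        x = self.dwconv(x).flatten(2).transpose(1, 2)
        return self.fc2(self.act(x))

class TransformerBlock(nn.Module):
    """Pre-norm residual block: LayerNorm -> attention, LayerNorm -> Mix-FFN.
    Assumes the EfficientSelfAttention class from the previous sketch."""

    def __init__(self, dim, num_heads=1, sr_ratio=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = EfficientSelfAttention(dim, num_heads, sr_ratio)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = MixFFN(dim)

    def forward(self, x, H, W):
        x = x + self.attn(self.norm1(x), H, W)
        x = x + self.ffn(self.norm2(x), H, W)
        return x
```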
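Finally, one possible all-MLP decoder head. The stage widths (64, 128, 320, 512) match a MiT-B2-style encoder and num_classes is illustrative; both are assumptions rather than this repo's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderHead(nn.Module):
    """All-MLP decoder: project each stage's features to a common width,
    upsample everything to 1/4 resolution, concatenate, fuse, predict."""

    def __init__(self, in_dims=(64, 128, 320, 512), embed_dim=256, num_classes=3):
        super().__init__()
        self.linears = nn.ModuleList([nn.Linear(d, embed_dim) for d in in_dims])
        self.fuse = nn.Sequential(
            nn.Conv2d(embed_dim * len(in_dims), embed_dim, kernel_size=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, features):
        # features: four (B, C_i, H_i, W_i) maps from the encoder stages,
        # highest resolution first.
        target = features[0].shape[2:]
        outs = []
        for feat, linear in zip(features, self.linears):
            B, C, H, W = feat.shape
            out = linear(feat.flatten(2).transpose(1, 2))      # (B, H*W, E)
            out = out.transpose(1, 2).reshape(B, -1, H, W)
            out = F.interpolate(out, size=target, mode='bilinear',
                                align_corners=False)
            outs.append(out)
        x = self.fuse(torch.cat(outs, dim=1))
        return self.classifier(x)   # logits at 1/4 input resolution
```

In the full model, the four feature maps produced by the hierarchical encoder stages would be fed to this head, and the logits upsampled to full resolution for training against the segmentation masks.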

Here is the result of a model trained on the BDD100K drivable-area task: highway-seg

Here are the attention maps from the video above: highway-attn

Languages

Jupyter Notebook 96.0%, Python 4.0%