ratikumari227 / Convolution_Variants

Reimplementing SOTA convolution variants with TensorFlow 2.0.


Convolution Variants

This repository replicates various convolution layers from SOTA papers.

This repository currently includes:

Attention Augmented Convolution Layer

AA Convolution Diagram

For other implementations in:

Notes

  • This implementation does not yet include relative positional encodings.
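For intuition, here is a rough sketch of the idea (not this repository's implementation): the output of an ordinary convolution is concatenated with multi-head self-attention features computed over all spatial positions. The sketch uses plain tf.keras ops, assumes NHWC tensors, and omits the relative positional encodings mentioned above; the function name and the depth_k, depth_v and num_heads arguments are illustrative, mirroring the constructor shown in the Usage section.

import tensorflow as tf

def aa_conv_sketch(x, channels_out=32, kernel_size=3,
                   depth_k=8, depth_v=8, num_heads=4):
    """Minimal attention-augmented convolution (NHWC, no relative encodings)."""
    b = tf.shape(x)[0]
    h, w = tf.shape(x)[1], tf.shape(x)[2]
    dkh, dvh = depth_k // num_heads, depth_v // num_heads

    # Standard convolution branch provides channels_out - depth_v features.
    conv_out = tf.keras.layers.Conv2D(channels_out - depth_v, kernel_size,
                                      padding='same')(x)

    # A 1x1 convolution produces queries, keys and values in one shot.
    qkv = tf.keras.layers.Conv2D(2 * depth_k + depth_v, 1)(x)
    q, k, v = tf.split(qkv, [depth_k, depth_k, depth_v], axis=-1)

    def split_heads(t, depth_per_head):
        # (B, H, W, depth) -> (B, heads, H*W, depth_per_head)
        t = tf.reshape(t, [b, h * w, num_heads, depth_per_head])
        return tf.transpose(t, [0, 2, 1, 3])

    q = split_heads(q, dkh) * dkh ** -0.5
    k = split_heads(k, dkh)
    v = split_heads(v, dvh)

    # Scaled dot-product self-attention over all spatial positions.
    weights = tf.nn.softmax(tf.matmul(q, k, transpose_b=True))
    attn = tf.matmul(weights, v)                      # (B, heads, H*W, dvh)

    # Merge heads back into a (B, H, W, depth_v) feature map.
    attn = tf.transpose(attn, [0, 2, 1, 3])
    attn = tf.reshape(attn, [b, h, w, depth_v])
    attn = tf.keras.layers.Conv2D(depth_v, 1)(attn)

    # Concatenate convolutional and attention features.
    return tf.concat([conv_out, attn], axis=-1)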

Mixed Depthwise Convolution Layer

Mix Conv Diagram

For other implementations in:

Notes

  • This implementation combines depthwise convolution with pointwise convolution. The original implementation only used depthwise convolutions.
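As a conceptual sketch (not the repository's code), mixed depthwise convolution splits the input channels into groups, applies a DepthwiseConv2D with a different kernel size to each group, concatenates the results, and, matching the note above, finishes with a 1x1 pointwise projection. The sketch assumes NHWC tensors and plain tf.keras layers; the function name is illustrative.

import tensorflow as tf

def mix_conv_sketch(x, kernel_sizes=(3, 5, 7), channels_out=32):
    """Minimal mixed depthwise convolution followed by a pointwise projection."""
    channels_in = x.shape[-1]
    group_size = channels_in // len(kernel_sizes)

    outputs = []
    for i, k in enumerate(kernel_sizes):
        # Each channel group gets its own depthwise kernel size.
        start = i * group_size
        end = channels_in if i == len(kernel_sizes) - 1 else start + group_size
        group = x[..., start:end]
        outputs.append(
            tf.keras.layers.DepthwiseConv2D(k, padding='same')(group))

    mixed = tf.concat(outputs, axis=-1)

    # Pointwise (1x1) convolution mixes information across all groups.
    return tf.keras.layers.Conv2D(channels_out, 1)(mixed)

For example, mix_conv_sketch(tf.random.normal([1, 32, 32, 24])) runs 3x3, 5x5 and 7x7 depthwise kernels over three 8-channel groups before the pointwise step.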

Drop Block

Drop Block

For other implementations in:
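For reference, DropBlock zeroes out contiguous block_size x block_size regions of the feature map rather than independent activations. The snippet below is a simplified sketch, not this repository's code: it assumes NHWC tensors and samples block seeds over the whole map instead of only the valid interior, as the original paper does.

import tensorflow as tf

def drop_block_sketch(x, keep_prob=0.9, block_size=7):
    """Minimal DropBlock for NHWC feature maps (training-time only)."""
    hf = tf.cast(tf.shape(x)[1], tf.float32)
    wf = tf.cast(tf.shape(x)[2], tf.float32)

    # gamma controls how many block centres are sampled so that roughly
    # (1 - keep_prob) of the activations end up dropped.
    gamma = ((1.0 - keep_prob) / block_size ** 2) * \
            (hf * wf) / ((hf - block_size + 1.0) * (wf - block_size + 1.0))

    # Sample block centres, then grow each centre into a block_size square
    # with max pooling; invert to obtain the keep-mask.
    centres = tf.cast(tf.random.uniform(tf.shape(x)) < gamma, tf.float32)
    block_mask = 1.0 - tf.nn.max_pool2d(centres, ksize=block_size,
                                        strides=1, padding='SAME')

    # Rescale so the expected activation magnitude is unchanged.
    kept = tf.reduce_sum(block_mask) / tf.cast(tf.size(block_mask), tf.float32)
    return x * block_mask / kept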

Efficient Channel Attention Layer

ECA

For other implementations in:
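Conceptually, ECA squeezes each channel with global average pooling and then models local cross-channel interaction with a small 1D convolution instead of a fully connected bottleneck. Below is a minimal NHWC sketch with standard tf.keras ops, not this repository's implementation; in the paper the kernel size is chosen adaptively from the channel count, while the fixed value here is just for illustration.

import tensorflow as tf

def eca_sketch(x, kernel_size=3):
    """Minimal Efficient Channel Attention for NHWC feature maps."""
    # Squeeze: global average pooling gives one descriptor per channel.
    squeezed = tf.reduce_mean(x, axis=[1, 2])          # (B, C)

    # Excite: a 1D convolution across the channel axis captures local
    # cross-channel interaction without any dimensionality reduction.
    attn = tf.keras.layers.Conv1D(1, kernel_size, padding='same')(
        squeezed[..., tf.newaxis])                     # (B, C, 1)
    attn = tf.sigmoid(attn[..., 0])                    # (B, C)

    # Re-weight every channel of the input feature map.
    return x * attn[:, tf.newaxis, tf.newaxis, :]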

Convolutional Block Attention Module Layer

CBAM

For other implementations in:
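CBAM applies channel attention followed by spatial attention. Below is a minimal NHWC sketch using standard tf.keras layers (illustrative only, not this repository's implementation): the channel branch passes average- and max-pooled descriptors through a shared MLP, and the spatial branch convolves channel-wise average and max maps with a large kernel.

import tensorflow as tf

def cbam_sketch(x, reduction=8, spatial_kernel=7):
    """Minimal Convolutional Block Attention Module for NHWC feature maps."""
    channels = x.shape[-1]

    # Channel attention: a shared MLP scores avg- and max-pooled descriptors.
    mlp = tf.keras.Sequential([
        tf.keras.layers.Dense(channels // reduction, activation='relu'),
        tf.keras.layers.Dense(channels),
    ])
    avg_pool = tf.reduce_mean(x, axis=[1, 2])            # (B, C)
    max_pool = tf.reduce_max(x, axis=[1, 2])             # (B, C)
    channel_attn = tf.sigmoid(mlp(avg_pool) + mlp(max_pool))
    x = x * channel_attn[:, tf.newaxis, tf.newaxis, :]

    # Spatial attention: pool across channels, then a large-kernel conv.
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)  # (B, H, W, 1)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)   # (B, H, W, 1)
    spatial_attn = tf.keras.layers.Conv2D(
        1, spatial_kernel, padding='same', activation='sigmoid')(
            tf.concat([avg_map, max_map], axis=-1))
    return x * spatial_attn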

Usage

Here is an example of how to use one of the layers:

import tensorflow as tf
from convVariants import AAConv

aaConv = AAConv(
    channels_out=32,
    kernel_size=3,
    depth_k=8, 
    depth_v=8, 
    num_heads=4)

The layer can be used like any other tf.keras.layers.Layer.

model = tf.keras.models.Sequential([
    aaConv,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
    ])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)

Tests

Test cases are located in tests.py.

To run tests:

cd Convolution_Variants
python tests.py

Requirements

  • TensorFlow 2.0.0 with GPU support

Caveats

  • These layers have only been tested with the NCHW (channels-first) input format.
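If your pipeline produces NHWC tensors (TensorFlow's default channels_last layout), one option is to transpose to NCHW before feeding these layers, for example:

import tensorflow as tf

# Hypothetical NHWC batch, e.g. from tf.data or tf.keras preprocessing.
x_nhwc = tf.random.normal([8, 32, 32, 3])

# Reorder the axes to NCHW (batch, channels, height, width) before
# passing the tensor to the layers in this repository.
x_nchw = tf.transpose(x_nhwc, perm=[0, 3, 1, 2])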

Acknowledgements

Links to the original papers:
