lucasdavid / resnest-tf

ResNeSt: Split-Attention Networks for Tensorflow2

ResNeSt: Split-Attention Networks

This is an implementation of "ResNeSt: Split-Attention Networks" in Keras and TensorFlow, using native convolution groups and ported weights.

The implementation is based on Hyungjin Kim's TF implementation, which in turn is based on the official Torch implementation.

This implementation requires TensorFlow 2.9 or newer, as the groups argument of Conv2D is used.
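As a quick illustration of what that argument does, the snippet below builds a grouped convolution, the primitive behind ResNeSt's cardinal (split-attention) branches. The shapes here are arbitrary examples, not values taken from the network:

```python
import tensorflow as tf

# With groups=4, the 64 input channels are split into 4 independent
# groups of 16, each convolved with its own set of filters.
x = tf.random.normal([1, 32, 32, 64])
conv = tf.keras.layers.Conv2D(filters=128, kernel_size=3, groups=4, padding='same')
y = conv(x)

print(y.shape)           # (1, 32, 32, 128)
print(conv.kernel.shape)  # (3, 3, 16, 128): each filter sees only 16 channels
```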

Pretrained Weights

Weights were ported from the Torch implementation using the conversion scripts in tools. All networks were pre-trained on ImageNet.
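The core of such a port is a layout change: Torch stores conv kernels as OIHW while Keras expects HWIO. The sketch below illustrates the idea with a random array standing in for a Torch tensor; it is not the actual tools/ script:

```python
import numpy as np
import tensorflow as tf

# Hypothetical ported kernel: Torch layout is (out, in, h, w).
torch_kernel = np.random.rand(128, 64, 3, 3).astype('float32')

# Keras layout is (h, w, in, out).
keras_kernel = torch_kernel.transpose(2, 3, 1, 0)

conv = tf.keras.layers.Conv2D(128, 3, padding='same')
conv.build([None, None, None, 64])
conv.set_weights([keras_kernel, np.zeros(128, 'float32')])
```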

Usage

import tensorflow as tf
from resnest import ResNeSt101

# For classification (dilation=1):
model = ResNeSt101(
  input_shape=[512, 512, 3],
  weights='imagenet'
)

# For segmentation (dilation in (2, 4)):
model = ResNeSt101(
  input_shape=[512, 512, 3],
  weights='imagenet',
  include_top=False,
  pooling=None,
  dilation=4
)

Preprocessing

Data must be preprocessed with tf.keras.applications.imagenet_utils.preprocess_input(x, mode='torch'). In other words:

from tensorflow.keras.applications.imagenet_utils import preprocess_input

x = load_data()
x = preprocess_input(x, mode='torch')

# Or...
x /= 255
x -= tf.convert_to_tensor([0.485, 0.456, 0.406])
x /= tf.convert_to_tensor([0.229, 0.224, 0.225])

License: MIT
