google / XNNPACK

High-efficiency floating-point neural network inference operators for mobile, server, and Web

convolution with nchw format

fwz-fpga opened this issue · comments

// Supported cases:
// + 1x1 convolution (no groups)
// + 3x3 stride-2 with 3 input channels and NHWC input layout
// + 3x3 stride-2 depthwise convolution with horizontal padding 1 & no vertical padding
// + 3x3 stride-1 depthwise convolution with horizontal padding 1 & no vertical padding
// + 5x5 stride-2 depthwise convolution with horizontal padding 2 & no vertical padding
// + 5x5 stride-1 depthwise convolution with horizontal padding 2 & no vertical padding

Does this mean that XNNPACK supports NCHW convolution only in these cases? Will XNNPACK support more NCHW-optimized kernels?
ONNX models exported from PyTorch default to NCHW format, so running an ONNX model with XNNPACK requires a lot of extra layout-conversion work for operators such as Concat, Slice, and Gather.
Any suggestions?
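For reference, the layout conversion being discussed is just an index permutation. The sketch below is a hypothetical pure-Python helper (not part of XNNPACK or ONNX Runtime) showing how a flat NCHW tensor maps to NHWC, which is the layout XNNPACK's general operators expect:

```python
def nchw_to_nhwc(data, n, c, h, w):
    """Permute a flat NCHW buffer into NHWC order.

    Illustrative sketch only; real frameworks insert an equivalent
    Transpose node (permutation [0, 2, 3, 1]) around NCHW graphs.
    """
    out = [0.0] * (n * c * h * w)
    for ni in range(n):
        for ci in range(c):
            for hi in range(h):
                for wi in range(w):
                    # NCHW: batch-major, then channel, then row, then column
                    src = ((ni * c + ci) * h + hi) * w + wi
                    # NHWC: batch-major, then row, then column, then channel
                    dst = ((ni * h + hi) * w + wi) * c + ci
                    out[dst] = data[src]
    return out
```

Each such transpose costs a full pass over the tensor, which is why converting every operator's inputs and outputs at runtime (rather than converting the whole graph once) adds noticeable overhead.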

XNNPACK supports only a very limited set of NCHW operators for sparse inference. See here for details. There are no plans for extending NCHW operator support.