likesum / bpn


There is a big difference between the FLOPs of the network and the paper

tjussh opened this issue · comments

Hello, this paper is excellent work, but I have a question about the FLOPs of the network. The count I get with my own code is vastly different from what is described in the paper. Is this difference expected, or is it a problem with my counting code?
Settings: resolution 1024x768, K=15, B=90, burst size = 8, grayscale. The FLOPs output of my code is as follows:

==================Model Analysis Report======================
Doc:
op: The nodes are operation kernel type, such as MatMul, Conv2D. Graph nodes belonging to the same type are aggregated together.
flops: Number of float operations. Note: Please read the implementation for the math behind it.

Profile:
node name | # float_ops
Conv2D 1286.04b float_ops (100.00%, 99.91%)
BiasAdd 640.54m float_ops (0.09%, 0.05%)
Softmax 354.70m float_ops (0.04%, 0.03%)
MaxPool 97.52m float_ops (0.01%, 0.01%)
Mean 47.97m float_ops (0.00%, 0.00%)
Mul 26 float_ops (0.00%, 0.00%)
Sub 2 float_ops (0.00%, 0.00%)
======================End of Report==========================
The total FLOPs is 1,287,180,783,100 (about 1.29 TFLOPs).

The FLOPs reported in the paper are 29.9 GFLOPs. Is my result reasonable?
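As a rough sanity check independent of the profiler, the theoretical FLOPs of a single Conv2D layer can be computed by hand, counting one multiply-accumulate as 2 FLOPs (the convention the TF profiler uses). The helper below is a hypothetical sketch, not part of the repository's code; it illustrates how quickly full-resolution convolutions reach the TFLOP range:

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w):
    """Theoretical FLOPs for one Conv2D layer.

    Pass the actual output spatial size (so stride/padding are already
    accounted for). Each output element needs k_h*k_w*c_in multiply-
    accumulates, and one MAC is counted as 2 FLOPs.
    """
    return 2 * h_out * w_out * c_out * k_h * k_w * c_in

# Example: a single 3x3 conv with 64 input and 64 output channels,
# applied at the full 1024x768 resolution from the settings above.
print(conv2d_flops(768, 1024, 64, 64, 3, 3))  # 57982058496, i.e. ~58 GFLOPs
```

At ~58 GFLOPs for just one such layer, a network running many convolutions at full resolution plausibly lands in the TFLOP range, so the gap versus a 29.9 GFLOPs figure may come from the paper counting at a different resolution, counting MACs instead of FLOPs, or counting only part of the pipeline.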

The code to calculate the FLOPs is as follows:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2_as_graph

def get_flops(model):
    # Trace the Keras model into a concrete function with batch size 1.
    concrete = tf.function(lambda inputs: model(inputs))
    concrete_func = concrete.get_concrete_function(
        [tf.TensorSpec([1, *inp.shape[1:]]) for inp in model.inputs])
    # Freeze variables into constants so the profiler sees a static graph.
    frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(concrete_func)
    with tf.Graph().as_default() as graph:
        tf.graph_util.import_graph_def(graph_def, name='')
        run_meta = tf.compat.v1.RunMetadata()
        opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
        # Profile the imported graph, aggregating float ops per op type.
        flops = tf.compat.v1.profiler.profile(
            graph=graph, run_meta=run_meta, cmd="op", options=opts)
    return flops.total_float_ops