google-deepmind / dnc

A TensorFlow implementation of the Differentiable Neural Computer.

How to improve CPU utilization

shushan2017 opened this issue · comments

I ran the DNC example and found that on a 32-core machine, CPU usage stays low. Is there any way to make the CPUs work at full capacity and improve efficiency? Thank you

I would recommend data parallelism rather than model parallelism here; see https://www.tensorflow.org/deploy/distributed#replicated_training. The general idea is to split your input training batch across multiple groups of CPU cores and have several instantiations of the DNC train on those partitions, instead of hoping the model parallelizes well across 32 cores for a single minibatch.
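To make the idea concrete, here is a minimal, framework-free sketch of the data-parallel pattern: one minibatch is split into shards, each "replica" computes a gradient on its shard, and the gradients are averaged for a single update. This is a toy illustration only; the functions (`toy_gradient`, `split_batch`, `data_parallel_step`) and the linear model are invented for this sketch and are not part of the DNC codebase. In real TensorFlow replicated training, each replica would run on its own group of cores rather than in a sequential loop.

```python
def toy_gradient(weight, shard):
    # Gradient of mean squared error 0.5 * (weight*x - y)^2 w.r.t. weight,
    # averaged over the examples in this shard.
    return sum((weight * x - y) * x for x, y in shard) / len(shard)

def split_batch(batch, num_replicas):
    # Partition the minibatch into roughly equal shards, one per replica.
    k, r = divmod(len(batch), num_replicas)
    shards, start = [], 0
    for i in range(num_replicas):
        end = start + k + (1 if i < r else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def data_parallel_step(weight, batch, num_replicas, lr=0.01):
    # Each replica computes a gradient on its shard; the averaged gradient
    # drives one shared parameter update. Here the replicas run sequentially
    # purely to show the arithmetic.
    shards = split_batch(batch, num_replicas)
    grads = [toy_gradient(weight, s) for s in shards if s]
    avg_grad = sum(grads) / len(grads)
    return weight - lr * avg_grad

if __name__ == "__main__":
    # Targets follow y = 2x, so training should drive the weight toward 2.
    batch = [(x, 2.0 * x) for x in range(1, 9)]
    w = 0.0
    for _ in range(200):
        w = data_parallel_step(w, batch, num_replicas=4)
    print(round(w, 3))  # converges toward 2.0
```

Because the shards here are equally sized, averaging the per-replica gradients gives exactly the full-batch gradient, which is why data parallelism preserves the training dynamics while letting each shard be processed on separate cores.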