google / seq2seq

A general-purpose encoder-decoder framework for Tensorflow

Home Page: https://google.github.io/seq2seq/

GPU allow growth configuration problem

hkhpub opened this issue · comments

I added `--gpu_allow_growth True`, but the program still allocates all of the available GPU memory.
The code below should work as a temporary workaround.

    import tensorflow as tf
    from tensorflow.contrib.learn.python.learn.estimators import run_config

    # FLAGS are the command-line flags already defined in bin/train.py.
    # Building the session config explicitly (instead of mutating
    # config.tf_config after the fact) makes the GPU options actually
    # reach the training session.
    session_config = tf.ConfigProto()
    session_config.gpu_options.allow_growth = FLAGS.gpu_allow_growth
    session_config.gpu_options.per_process_gpu_memory_fraction = FLAGS.gpu_memory_fraction
    session_config.log_device_placement = FLAGS.log_device_placement

    config = run_config.RunConfig(
        tf_random_seed=FLAGS.tf_random_seed,
        save_checkpoints_secs=FLAGS.save_checkpoints_secs,
        save_checkpoints_steps=FLAGS.save_checkpoints_steps,
        keep_checkpoint_max=FLAGS.keep_checkpoint_max,
        keep_checkpoint_every_n_hours=FLAGS.keep_checkpoint_every_n_hours,
        session_config=session_config)

I'm opening this issue for anyone who runs into the same problem.
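
As background, here is a minimal self-contained sketch of what these two GPU options do when attached to a plain `tf.Session` (assuming TensorFlow 1.x; the 0.5 fraction is just an example value):

    import tensorflow as tf

    # Same kind of ConfigProto the workaround above feeds into RunConfig.
    session_config = tf.ConfigProto()
    # Start with a small GPU allocation and grow it on demand instead of
    # reserving nearly all free GPU memory when the session is created.
    session_config.gpu_options.allow_growth = True
    # Optionally cap this process at a fixed fraction of total GPU memory.
    session_config.gpu_options.per_process_gpu_memory_fraction = 0.5

    with tf.Session(config=session_config) as sess:
        # Ops run in this session respect the GPU options above.
        print(sess.run(tf.constant("gpu options applied")))

The workaround simply routes the same ConfigProto into the RunConfig that bin/train.py hands to its Estimator.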