Memory leak issue
GraceBoston opened this issue
tensorflow-gpu 1.3.0
tensorflow-tensorboard 0.1.8
Keras 2.0.6
Keras-Applications 1.0.6
I downloaded models from the model zoo and ran two of them (ssd_mobilenet_v1_coco, ssd_inception_v2_coco) without any problems. But when I try the faster_rcnn_inception_resnet_v2_atrous_coco and rfcn_resnet101_coco models, training starts correctly and then consumes almost all of my RAM (62.8G/62.8G), so I can't keep it running. Have you ever run into this issue?
My training images are 2394x3062 PNG files, so I resize them to 600x767 in the config file (the same ~0.78 aspect ratio as the originals).
I am also setting os.environ['CUDA_VISIBLE_DEVICES'] = '1' so training runs on a single GPU, and that part works well.
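For reference, here is a minimal sketch of how I set that up (TF 1.x API; the environment variable has to be set before TensorFlow is imported, and the optional GPU-memory setting only affects GPU memory, not the host RAM that keeps growing):

# Select GPU 1 before TensorFlow is imported, otherwise the setting is ignored.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf

# Optional: let TensorFlow grow GPU memory on demand instead of grabbing it all.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)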
Total memory: 10.91GiB
Free memory: 10.75GiB
2018-10-23 10:55:29.290302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-10-23 10:55:29.290311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2018-10-23 10:55:29.290323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0)
2018-10-23 10:55:31.938505: I tensorflow/core/common_runtime/simple_placer.cc:697] Ignoring device specification /device:GPU:0 for node 'prefetch_queue_Dequeue' because the input edge from 'prefetch_queue' is a reference connection and already has a device field set to /device:CPU:0
INFO:tensorflow:Restoring parameters from model.ckpt
INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path train/model.ckpt
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
INFO:tensorflow:Recording summary at step 0.
INFO:tensorflow:global step 1: loss = 1.5886 (24.888 sec/step)
INFO:tensorflow:global step 2: loss = 1.4227 (0.812 sec/step)
INFO:tensorflow:global step 3: loss = 1.2016 (0.875 sec/step)
model {
  faster_rcnn {
    num_classes: 1
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 767
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_resnet_v2'
      first_stage_features_stride: 8
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 8
        width_stride: 8
      }
    }
    first_stage_atrous_rate: 2
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 17
    maxpool_kernel_size: 1
    maxpool_stride: 1
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0003
          schedule {
            step: 0
            learning_rate: .0003
          }
          schedule {
            step: 900000
            learning_rate: .00003
          }
          schedule {
            step: 1200000
            learning_rate: .000003
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "model.ckpt"
  from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "train.record"
  }
  label_map_path: "annotations/label_map.pbtxt"
}
eval_config: {
  num_examples: 170
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "val.record"
  }
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
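One mitigation I am experimenting with is shrinking the input queues, since the defaults buffer many decoded full-size images in host RAM for these large models. This is only a sketch: batch_queue_capacity, num_batch_queue_threads, prefetch_queue_capacity, queue_capacity, and min_after_dequeue are fields I found in the Object Detection API's train.proto / input_reader.proto, so they may differ in other releases.

train_config: {
  batch_size: 1
  # Smaller queues keep fewer decoded images in host RAM.
  batch_queue_capacity: 2
  num_batch_queue_threads: 1
  prefetch_queue_capacity: 2
  # ... rest of train_config unchanged ...
}
train_input_reader: {
  # Lower the shuffle queue as well; the defaults are sized for small images.
  queue_capacity: 16
  min_after_dequeue: 8
  tf_record_input_reader {
    input_path: "train.record"
  }
  label_map_path: "annotations/label_map.pbtxt"
}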
Hi @GraceBoston, this repo relies on a dated version of the TensorFlow API. We've moved to a more future-proof version here: https://github.com/cloud-annotations/training
I encourage you to try it out and reopen this issue there if you are still running into problems.