Running SyntaxNet on a designated device (at the Python level)
j-min opened this issue
I posted this question on Stack Overflow but haven't gotten a good answer yet.
Could you please let me know how to designate which device to use when training/testing SyntaxNet?
In other TensorFlow models we can easily change the device configuration by editing the Python code, e.g. changing `tf.device('/cpu:0')` to `tf.device('/gpu:0')`.
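For context, this is the Python-level device pinning that stock TensorFlow models support via the `tf.device` context manager (a minimal sketch using current TensorFlow; `'/gpu:0'` only resolves if a GPU is actually visible to TensorFlow):

```python
import tensorflow as tf

# Pin ops to a specific device with a tf.device context manager.
# '/cpu:0' is always available; swapping in '/gpu:0' moves the ops
# to the first GPU, if one is visible to TensorFlow.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

# The resulting tensor records where it was placed,
# e.g. '/job:localhost/replica:0/task:0/device:CPU:0'.
print(c.device)
```

The question is whether SyntaxNet exposes an equivalent knob, since its graph is driven by custom C++ ops rather than plain Python model code.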
I was able to run the Parsey McParseface model via demo.sh, and I followed the symbolic links back to look for the device configuration.
Maybe I missed something, but I cannot find any GPU-configuration Python code in demo.sh, parser_eval.py, or context.proto.
When I search for 'device' in tensorflow/models, I see that several C++ files, such as syntaxnet/syntaxnet/unpack_sparse_features.cc, contain the line `using tensorflow::DEVICE_CPU;`.
So, is changing the C++ code in these files the only way to change the device configuration for SyntaxNet?
I hope there is a simpler way to change the setting at the Python level.
Thanks in advance.
I thought the GPU configuration was coded inside SyntaxNet's C++ sources, so I guess it cannot be controlled from Python code.
I see. I read in another issue in this repo that you had trained SyntaxNet on a GPU. Unfortunately I'm not familiar with the TensorFlow C++ API. When we train SyntaxNet on a GPU, do we have to change anything other than replacing `using tensorflow::DEVICE_CPU;` with `using tensorflow::DEVICE_GPU;`?
@j-min
I am not sure, but training a model on the GPU is the default behavior of the TensorFlow libraries, because training takes less time there than on the CPU.
So I guess the libraries linked with parser_eval.py
do not use the GPU when we evaluate the model.
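Rather than guessing, one way to check which device each op actually runs on is TensorFlow's device placement logging (a sketch using the current `tf.debugging` API; in the TF version SyntaxNet originally shipped with, the equivalent was passing `tf.ConfigProto(log_device_placement=True)` to `tf.Session`):

```python
import tensorflow as tf

# Ask TensorFlow to log each op's device placement to stderr
# as it executes, e.g. "... op MatMul in device .../device:CPU:0".
tf.debugging.set_log_device_placement(True)

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)  # the log line shows whether this ran on CPU or GPU

# The placement is also recorded on the tensor itself.
print(y.device)
```

Running the evaluation with this logging enabled would confirm whether the SyntaxNet kernels are being placed on the CPU.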