Error while running Flask server
kemo993 opened this issue
Kemal Korjenic commented
Hi,
I'm trying to run the Flask server with a model trained from scratch (`python online/server.py -name new_run`), but I get the following error:
WARNING: Logging before flag parsing goes to stderr.
W0624 21:49:30.760367 139833379484992 doc2vec.py:75] Slow version of gensim.models.doc2vec is being used
Loading RESIDE model.
2019-06-24 21:51:47,860 - [INFO] - {'dataset': 'riedel', 'gpu': '0', 'wGate': True, 'lstm_dim': 192, 'port': 3535, 'pos_dim': 16, 'type_dim': 50, 'alias_dim': 32, 'de_gcn_dim': 16, 'max_pos': 60, 'de_layers': 1, 'dropout': 0.8, 'rec_dropout': 0.8, 'lr': 0.001, 'l2': 0.001, 'max_epochs': 2, 'batch_size': 32, 'chunk_size': 1000, 'restore': False, 'only_eval': False, 'opt': 'sgd', 'eps': 1e-08, 'name': 'new_run', 'seed': 1234, 'log_dir': './log/', 'config_dir': './config/', 'embed_loc': './glove/glove.6B.50d_word2vec.txt', 'embed_dim': 50, 'rel2alias_file': './side_info/relation_alias/riedel/relation_alias_from_wikidata_ppdb_extended.json', 'type2id_file': './side_info/entity_type/riedel/type_info.json'}
{'alias_dim': 32,
'batch_size': 32,
'chunk_size': 1000,
'config_dir': './config/',
'dataset': 'riedel',
'de_gcn_dim': 16,
'de_layers': 1,
'dropout': 0.8,
'embed_dim': 50,
'embed_loc': './glove/glove.6B.50d_word2vec.txt',
'eps': 1e-08,
'gpu': '0',
'l2': 0.001,
'log_dir': './log/',
'lr': 0.001,
'lstm_dim': 192,
'max_epochs': 2,
'max_pos': 60,
'name': 'new_run',
'only_eval': False,
'opt': 'sgd',
'port': 3535,
'pos_dim': 16,
'rec_dropout': 0.8,
'rel2alias_file': './side_info/relation_alias/riedel/relation_alias_from_wikidata_ppdb_extended.json',
'restore': False,
'seed': 1234,
'type2id_file': './side_info/entity_type/riedel/type_info.json',
'type_dim': 50,
'wGate': True}
/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/smart_open/smart_open_lib.py:398: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
2019-06-24 21:53:10.510101: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-06-24 21:53:10.538389: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2500000000 Hz
2019-06-24 21:53:10.538861: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x11aedfe20 executing computations on platform Host. Devices:
2019-06-24 21:53:10.538891: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2019-06-24 21:53:11.875817: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2019-06-24 21:53:12.459249: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key Bi-LSTM/bidirectional_rnn/bw/BW_GRU/candidate/bias not found in checkpoint
Traceback (most recent call last):
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key Bi-LSTM/bidirectional_rnn/bw/BW_GRU/candidate/bias not found in checkpoint
[[{{node save/RestoreV2}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1286, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key Bi-LSTM/bidirectional_rnn/bw/BW_GRU/candidate/bias not found in checkpoint
[[node save/RestoreV2 (defined at online/server.py:77) ]]
Original stack trace for 'save/RestoreV2':
File "online/server.py", line 77, in <module>
saver = tf.train.Saver()
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 825, in __init__
self.build()
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 837, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 875, in _build
build_restore=build_restore)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 508, in _build_internal
restore_sequentially, reshape)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 328, in _AddRestoreOps
restore_sequentially)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 575, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1696, in restore_v2
name=name)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1296, in restore
names_to_keys = object_graph_key_mapping(save_path)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1614, in object_graph_key_mapping
object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 678, in get_tensor
return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "online/server.py", line 83, in <module>
saver.restore(sess, save_path)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1302, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key Bi-LSTM/bidirectional_rnn/bw/BW_GRU/candidate/bias not found in checkpoint
[[node save/RestoreV2 (defined at online/server.py:77) ]]
Original stack trace for 'save/RestoreV2':
File "online/server.py", line 77, in <module>
saver = tf.train.Saver()
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 825, in __init__
self.build()
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 837, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 875, in _build
build_restore=build_restore)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 508, in _build_internal
restore_sequentially, reshape)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 328, in _AddRestoreOps
restore_sequentially)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 575, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1696, in restore_v2
name=name)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/home/ec2-user/virtualenvs/RESIDE/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
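For anyone hitting the same `NotFoundError`, the root cause is a mismatch between the variable names the restored graph expects and the names actually stored in the checkpoint (here the graph looks for a GRU key, `Bi-LSTM/bidirectional_rnn/bw/BW_GRU/candidate/bias`). In TF1 the two name sets can be obtained with `tf.train.list_variables(ckpt_path)` and `tf.global_variables()`; the sketch below diffs two such sets using illustrative, hypothetical checkpoint names (only the GRU key is taken from the log above):

```python
# Sketch: diagnose a "Key ... not found in checkpoint" error by diffing
# the variable names the graph expects against those the checkpoint holds.
# In a real TF1 session the two sets would come from:
#   ckpt_vars  = {name for name, _ in tf.train.list_variables(ckpt_path)}
#   graph_vars = {v.op.name for v in tf.global_variables()}
# The name lists below are illustrative, not taken from the actual run
# (except the GRU key, which appears in the error message).

graph_vars = {
    "Bi-LSTM/bidirectional_rnn/fw/FW_GRU/candidate/bias",
    "Bi-LSTM/bidirectional_rnn/bw/BW_GRU/candidate/bias",
}
ckpt_vars = {
    "Bi-LSTM/bidirectional_rnn/fw/fw_lstm/kernel",  # hypothetical old names
    "Bi-LSTM/bidirectional_rnn/bw/bw_lstm/kernel",
}

missing = sorted(graph_vars - ckpt_vars)  # expected by graph, absent in checkpoint
unused = sorted(ckpt_vars - graph_vars)   # present in checkpoint, unused by graph

for name in missing:
    print("missing from checkpoint:", name)
for name in unused:
    print("unused by graph:", name)
```

If `missing` is non-empty, the checkpoint was written by a different graph definition (e.g. an older version of the code), and the fix is to retrain or to update the code so both sides agree.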
Thanks!
Shikhar Vashishth commented
Hi @kemo993,
I am sorry for the problem. Please pull the latest changes and try again.
Kemal Korjenic commented
Thanks a lot, @svjan5! Will try again and let you know.