CAMMA-public / SelfSupSurg

Official repository for "Dissecting Self-Supervised Learning Methods for Surgical Computer Vision"

RuntimeError: No rendezvous handler for tcp://

309020726 opened this issue

Hello, I want to run this code on a Windows system; the virtual environment is configured according to the instructions you provided. Since I only have one GPU available, I set the GPU count to 1 in the config file and then ran `python main.py -hp hparams\cholec80\pre_training\cholec_to_cholec\series_01\h001.yaml -m self_supervised`, at which point the following error occurred:

......
--- Logging error ---
Traceback (most recent call last):
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\utils\distributed_launcher.py", line 150, in launch_distributed
_distributed_worker(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\utils\distributed_launcher.py", line 192, in _distributed_worker
run_engine(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\engines\engine_registry.py", line 86, in run_engine
engine.run_engine(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\engines\train.py", line 39, in run_engine
train_main(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\engines\train.py", line 127, in train_main
trainer = SelfSupervisionTrainer(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\trainer\trainer_main.py", line 86, in init
self.setup_distributed(self.cfg.MACHINE.DEVICE == "gpu")
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\trainer\trainer_main.py", line 118, in setup_distributed
torch.distributed.init_process_group(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\torch\distributed\distributed_c10d.py", line 433, in init_process_group
rendezvous_iterator = rendezvous(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\torch\distributed\rendezvous.py", line 82, in rendezvous
raise RuntimeError("No rendezvous handler for {}://".format(result.scheme))
RuntimeError: No rendezvous handler for tcp://

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\anaconda\envs\selfsupsurg\lib\logging_init_.py", line 1085, in emit
msg = self.format(record)
File "D:\anaconda\envs\selfsupsurg\lib\logging_init_.py", line 929, in format
return fmt.format(record)
File "D:\anaconda\envs\selfsupsurg\lib\logging_init_.py", line 668, in format
record.message = record.getMessage()
File "D:\anaconda\envs\selfsupsurg\lib\logging_init_.py", line 373, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "main.py", line 97, in
hydra_main(overrides=overrides, mode=training_mode)
File "main.py", line 59, in hydra_main
launch_distributed(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\utils\distributed_launcher.py", line 162, in launch_distributed
logging.error("Wrapping up, caught exception: ", e)
Message: 'Wrapping up, caught exception: '
Arguments: (RuntimeError('No rendezvous handler for tcp://'),)
Traceback (most recent call last):
File "main.py", line 97, in
hydra_main(overrides=overrides, mode=training_mode)
File "main.py", line 59, in hydra_main
launch_distributed(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\utils\distributed_launcher.py", line 164, in launch_distributed
raise e
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\utils\distributed_launcher.py", line 150, in launch_distributed
_distributed_worker(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\utils\distributed_launcher.py", line 192, in _distributed_worker
run_engine(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\engines\engine_registry.py", line 86, in run_engine
engine.run_engine(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\engines\train.py", line 39, in run_engine
train_main(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\engines\train.py", line 127, in train_main
trainer = SelfSupervisionTrainer(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\trainer\trainer_main.py", line 86, in init
self.setup_distributed(self.cfg.MACHINE.DEVICE == "gpu")
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\vissl\trainer\trainer_main.py", line 118, in setup_distributed
torch.distributed.init_process_group(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\torch\distributed\distributed_c10d.py", line 433, in init_process_group
rendezvous_iterator = rendezvous(
File "D:\anaconda\envs\selfsupsurg\lib\site-packages\torch\distributed\rendezvous.py", line 82, in rendezvous
raise RuntimeError("No rendezvous handler for {}://".format(result.scheme))
RuntimeError: No rendezvous handler for tcp://
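
As a side note, the `--- Logging error ---` block appears to be a separate, cosmetic bug in VISSL's error handler rather than the root cause: `logging.error("Wrapping up, caught exception: ", e)` passes the exception as a %-style argument, but the message string contains no `%s` placeholder, which is what triggers the `TypeError: not all arguments converted during string formatting` inside the logging machinery. A minimal sketch of that behaviour, independent of VISSL:

```python
import logging

logging.basicConfig(level=logging.ERROR)
err = RuntimeError("No rendezvous handler for tcp://")

# The pattern used in vissl/utils/distributed_launcher.py: the message has
# no %s placeholder for the extra argument, so the logging module reports
# "--- Logging error ---" with a TypeError instead of the message.
logging.error("Wrapping up, caught exception: ", err)

# %-style call with a placeholder: the exception is interpolated correctly.
logging.error("Wrapping up, caught exception: %s", err)
```

The real failure is still the `RuntimeError: No rendezvous handler for tcp://` raised during `init_process_group`.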

Looking forward to your reply so that I can reproduce the algorithm. Thank you!

Hi, the error seems to stem from improper distributed-training configuration due to the use of Windows. Unfortunately, we don't have a Windows machine here to reproduce the issue. We suggest using a Linux OS to run the training with this repo.
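
For context, the immediate cause is that Windows builds of PyTorch from that generation register a rendezvous handler only for `file://` (support for `tcp://` and `env://` on Windows came in later releases) and support only the `gloo` backend, so the `tcp://` init method used by the default config has nothing to dispatch to. If you still want to experiment on Windows, below is a minimal, untested sketch of the process-group initialization switched to a `file://` rendezvous for a single process; the init-file path is an arbitrary example, not something the repo defines:

```python
import os
import tempfile
import torch.distributed as dist

# Untested Windows sketch. Assumptions: this PyTorch build supports only the
# "gloo" backend and the file:// rendezvous, so we avoid NCCL and tcp://.
# The shared-file path below is an arbitrary example location.
init_file = os.path.join(tempfile.gettempdir(), "pt_dist_init")

dist.init_process_group(
    backend="gloo",  # NCCL is not available on Windows
    init_method="file:///" + init_file.replace(os.sep, "/"),
    rank=0,          # single machine, single process
    world_size=1,    # one GPU -> one process
)
```

VISSL also exposes an `INIT_METHOD` field under the `DISTRIBUTED` section of its config, so overriding it from `tcp` to `file` in the yaml may be a less invasive way to try the same thing. Even if the rendezvous then succeeds, the rest of the pipeline has only been tested on Linux, so Linux remains the recommended setup.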