ai4co / rl4co

A PyTorch library for all things Reinforcement Learning (RL) for Combinatorial Optimization (CO)

Home Page: https://rl4.co

[BUG] When I run the SDVRP example, it raises an error

kuoiii opened this issue · comments

commented

Hello, I found a bug when trying to run the SDVRP problem. Here are my settings:
# Imports (the rl4co import paths below are typical, but may differ across versions)
import lightning as L
from omegaconf import DictConfig

from rl4co.envs import SDVRPEnv
from rl4co.models import AttentionModel
from rl4co.tasks.rl4co import RL4COLitModule

config = DictConfig(
    {
        "data": {
            "train_size": 10000,
            "val_size": 100,
            "test_size": 2,
            "batch_size": 50,
            "generate_data": True,
        },
        "optimizer": {"lr": 1e-4},
        "path": {"data_dir": "data111/"},
    }
)

# Environment, Model, and Lightning Module
sdvrpenv = SDVRPEnv(
    num_loc=20,
    min_loc=0,
    max_loc=1,
    min_demand=1,
    max_demand=10,
    vehicle_capacity=1.0,
    capacity=1.0,
    # train_file="tsp/tsp20_test_seed1234.npz",
    # val_file="tsp/tsp20_test_seed1234.npz",
    # test_file="tsp/tsp20_test_seed1234.npz",
    seed=None,
    device="cuda",
)
model = AttentionModel(sdvrpenv)
lit_module = RL4COLitModule(config, sdvrpenv, model)

# Trainer
trainer = L.Trainer(
    max_epochs=3,  # only a few epochs
    accelerator="gpu",  # use GPU if available, otherwise e.g. "cpu"
    logger=None,  # can replace with WandbLogger, TensorBoardLogger, etc.
    precision="16-mixed",  # Lightning handles faster training with mixed precision
    gradient_clip_val=1.0,  # clip gradients to avoid exploding gradients
    reload_dataloaders_every_n_epochs=1,  # necessary for sampling new data
)

trainer.fit(lit_module)
trainer.test(lit_module)

Then it raises an error:
glimpse_k = cached.glimpse_key + glimpse_key_dynamic
RuntimeError: The size of tensor a (21) must match the size of tensor b (20) at non-singleton dimension 1
It looks like the SDVRP environment might not be complete? When I ran an older version, I found that this environment had not been implemented yet.
Thanks for reading.

Hi! Thanks for your report.
It turns out there was a small bug in the dynamic embedding, which is now fixed. We also updated our notebooks and included an example with SDVRP here; it is the same as the quickstart, just with the SDVRP environment. You may also modify it and try other environments :)
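For anyone hitting the same RuntimeError: the cached glimpse keys cover all nodes including the depot (num_loc + 1 = 21 in your run), so the dynamic update added to them at each decoding step must have the same node dimension. Below is a minimal, illustrative sketch of such a dynamic embedding; the class and argument names are placeholders, not the library's exact code.

import torch
import torch.nn as nn

class SDVRPDynamicEmbeddingSketch(nn.Module):
    """Illustrative sketch: project the per-node remaining demand into an
    update that gets added to the cached attention keys/values/logit keys.
    The node dimension must include the depot so it matches the cached
    tensors of shape [batch, num_loc + 1, embed_dim]."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.project = nn.Linear(1, 3 * embed_dim, bias=False)

    def forward(self, demand_with_depot: torch.Tensor):
        # demand_with_depot: [batch, num_loc + 1]. Projecting the demand
        # *without* the depot ([batch, num_loc]) is exactly the kind of
        # off-by-one that produces "tensor a (21) vs tensor b (20)".
        glimpse_k_dyn, glimpse_v_dyn, logit_k_dyn = self.project(
            demand_with_depot[..., None]
        ).chunk(3, dim=-1)
        return glimpse_k_dyn, glimpse_v_dyn, logit_k_dyn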

We pushed some fixes, including the above, to the latest release, so please make sure you run pip install --upgrade rl4co!
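After upgrading, you can confirm which version is installed, for example:

from importlib.metadata import version
print(version("rl4co"))  # should show the latest release after the upgrade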

Closing since answered - feel free to re-open should you have any further issues ;)