wayveai / Driving-with-LLMs

PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"

View Results

PzWHU opened this issue

Thanks for your great work!
I want to view the results after evaluation. Where can I find the WandB project "llm-driver"?

Hi, thanks for your interest in our work! You need to set your WandB API key as an environment variable, as mentioned here. After running the evaluation, you can find the "llm-driver" project under your WandB account.
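
If you prefer to pull the metrics programmatically rather than through the web UI, the WandB public API can fetch them. A minimal sketch, where "your-entity" is a placeholder for your own WandB username or team name, and "llm-driver" is the project created by the evaluation run:

import wandb

# Authenticates via the WANDB_API_KEY environment variable
api = wandb.Api()

# Replace "your-entity" with your WandB username or team name
for run in api.runs("your-entity/llm-driver"):
    print(run.name, run.url)
    # The run summary holds the final logged metrics from the evaluation
    print("car_error:", run.summary.get("car_error"))
    print("tl_accuracy:", run.summary.get("tl_accuracy"))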

Hi, I have the same question, and thanks for your answer! I have run the following commands successfully:

export WANDB_API_KEY=12345abcde...
python train.py --mode eval --resume_from_checkpoint models/weights/stage2_with_pretrained/ --data_path data/vqa_train_10k.pkl --val_data_path data/vqa_test_1k.pkl --eval_items caption,action --vqa

It shows that both local and wandb results/logs are created successfully:

Run summary:
wandb:               car_error 0.06162
wandb:       control_error_lat 0.01453
wandb:       control_error_lon 0.06686
wandb:               eval/loss 0.50115
wandb:            eval/runtime 19423.4723
wandb: eval/samples_per_second 0.102
wandb:   eval/steps_per_second 0.013
wandb:               ped_error 0.31515
wandb:             tl_accuracy 0.71487
wandb:             tl_distance 6.70763
wandb:       train/global_step 0
wandb: 
wandb: 🚀 View run jolly-lake-1 at: https://wandb.ai/minounou/llm-driver/runs/hsbkmjlk
wandb: Synced 6 W&B file(s), 2 media file(s), 3 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20240505_211705-hsbkmjlk/logs

Could you let me know how to generate the following kind of output (video)? Thanks!
https://github.com/wayveai/Driving-with-LLMs/blob/main/assets/main.gif
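
For context, the stitching part is not the problem; something like the untested sketch below would assemble a GIF, assuming per-timestep visualisation frames (camera view plus BEV) had already been rendered as numbered PNGs into a render_frames/ directory (a hypothetical layout, not something the repo produces out of the box). What I'm missing is how to render those frames in the first place.

import glob
import imageio.v2 as imageio

# Hypothetical input: per-timestep frames rendered as numbered PNGs
frame_paths = sorted(glob.glob("render_frames/*.png"))
frames = [imageio.imread(p) for p in frame_paths]

# Stitch the frames into a GIF; duration is seconds per frame (~10 fps here)
imageio.mimsave("driving_demo.gif", frames, duration=0.1)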

Hi, I also have a similar question: how can I visualize the front-camera picture from the ego car, and the BEV (bird's-eye-view) image showing the ego car's trajectory?
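
For the BEV part, a minimal matplotlib sketch of the kind of plot I mean, using made-up ego-centric waypoints in metres; the real trajectory would have to be decoded from the model's action output, which is the step I'm unsure about:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: (N, 2) ego-vehicle waypoints, x forward / y left, in metres
waypoints = np.array([[0.0, 0.0], [2.0, 0.1], [4.1, 0.3], [6.3, 0.7]])

fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(waypoints[:, 1], waypoints[:, 0], "o-", label="ego trajectory")
ax.scatter([0.0], [0.0], marker="s", s=80, label="ego vehicle")
ax.set_xlabel("lateral (m)")
ax.set_ylabel("longitudinal (m)")
ax.set_aspect("equal")
ax.legend()
plt.savefig("bev_frame.png", dpi=150)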

Hi, I have the same question.