idreesshaikh / Autonomous-Driving-in-Carla-using-Deep-Reinforcement-Learning

Deep Reinforcement Learning (PPO) in Autonomous Driving (Carla) [from scratch]

Couldn't import Carla egg properly

longziyu opened this issue · comments

I tried to run this code on a Linux system and, during the project setup phase, replaced the corresponding libraries (such as torch, torchvision, and tensorboard) with their Linux versions. poetry update completed without errors, but an error occurred when running the continuous_driver.py file:

(venv) ......$ python continuous_driver.py --exp-name=Ppo --train=False
Couldn't import Carla egg properly
pygame 2.1.2 (SDL 2.0.16, Python 3.7.13)
Hello from the pygame community. https://www.pygame.org/contribute.html
Failed to make a connection with the server: module 'carla' has no attribute 'Client'
ERROR:root:Connection has been refused by the server.
Exit
Traceback (most recent call last):
  File "continuous_driver.py", line 297, in <module>
    runner()
  File "continuous_driver.py", line 115, in runner
    env = CarlaEnvironment(client, world, town, checkpoint_frequency=None)
UnboundLocalError: local variable 'client' referenced before assignment

I had opened CarlaUE4Editor and started it running before running the code. In the carla folder of the repo I found only the carla-0.9.8-py3.7-win-amd64.egg file, which is not compatible with Linux. How can I make the code run correctly on a Linux system?

Hi, yes, the egg file in this repository is only for Windows. You have to download the server (simulator: https://github.com/carla-simulator/carla/releases) for Linux. Then replace the egg file in this repository's ~/carla/ directory with the one from the simulator you just downloaded, found under ~/PythonAPI/carla/dist/. I hope that helps.

Secondly, be mindful that I'm using Carla 0.9.8; any version lower than this might be incompatible.

Also make sure ppo is lowercase, e.g. --exp-name=ppo.
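
For reference, CARLA's example scripts (and most projects derived from them) locate the egg with a glob before importing carla, which is why a missing or mismatched egg produces the "Couldn't import Carla egg properly" line. A minimal sketch of that pattern, assuming this repo's ./carla/ layout (the exact path may differ in your checkout):

```python
import glob
import os
import sys

# Standard CARLA egg-discovery pattern: append the matching egg to
# sys.path before `import carla`. The './carla/' location is assumed
# from this repository's layout.
try:
    sys.path.append(glob.glob('carla/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    print("Couldn't import Carla egg properly")

import carla
```

On Linux with only the Windows egg present, the glob matches nothing, so the IndexError branch fires; `import carla` then likely resolves the repo's own carla/ folder as an empty namespace package, which would explain the "module 'carla' has no attribute 'Client'" error in the log above.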

Thank you for your answer to my first question. After I switched to Windows, new problems appeared.

(venv) C:\Autonomous-Driving-in-Carla-using-Deep-Reinforcement-Learning-main\Autonomous-Driving-in-Carla-using-Deep-Reinforcement-Learning-main>python continuous_driver.py --exp-name=ppo --train=False
pygame 2.1.2 (SDL 2.0.18, Python 3.7.16)
Hello from the pygame community. https://www.pygame.org/contribute.html
WARNING: Version mismatch detected: You are trying to connect to a simulator that might be incompatible with this API
WARNING: Client API version = e32e6bff
WARNING: Simulator API version = 0.9.14
Failed to make a connection with the server: time-out of 20000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000
Client version: e32e6bff
Server version: 0.9.14
There is a Client and Server version mismatch! Please install or download the right versions.

There are two questions. First, about the API version: the Carla version I used is 0.9.14, but it does not match the API version in your code (is your Client API version, e32e6bff, a hash value?).
Second, there is the port-connection timeout. I'm sure that when I start Carla, port 2000 is open and in the LISTENING state.

......>netstat -aon | findstr 2000
TCP    0.0.0.0:2000    0.0.0.0:0    LISTENING    12480
......>netstat -aon | findstr "2001"
TCP    0.0.0.0:2001    0.0.0.0:0    LISTENING    12480
......>tasklist | findstr 12480
CarlaUE4-Win64-Shipping.exe    12480 Console    1    1,735,312 K

Could you please give me some suggestions to solve these problems?

It's exactly the same problem. The server (simulator) you downloaded has its own egg in it. You should replace the egg file in this repository's ~/carla/ directory with the egg file from your downloaded server, as mentioned previously.

If anything I said above is unclear, try running Carla Server 0.9.8 instead of 0.9.14, or read Carla's documentation so you can understand how things work 😊
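
As a quick sanity check once the right egg is in place, the stock CARLA Python API can report what each side thinks its version is; a minimal sketch, assuming the default localhost:2000 endpoint:

```python
import carla

# Probe the simulator and print both version strings (defaults assumed:
# localhost:2000, with the same 20 s timeout seen in the log above).
client = carla.Client('localhost', 2000)
client.set_timeout(20.0)
print('Client version:', client.get_client_version())
print('Server version:', client.get_server_version())
```

A version string like e32e6bff is most likely a git commit hash: a client built from an untagged source tree reports the commit instead of a release number, which would answer the hash question above.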

Thank you a lot. Now I can successfully run your code. I am a beginner in the Carla and RL fields. When I run the code, both python continuous_driver.py --exp-name ppo --train False and python continuous_driver.py --exp-name ppo start learning again. The difference is that the absolute value of the average rewards of the former is smaller from the beginning.

No matter which command I use, each episode lasts only about 20 timesteps, so short that the Pygame window shows the vehicle just spawning at its spawn point before the next episode starts and interrupts it. Every episode is a repetition of the vehicle at the spawn point. I have run more than three thousand episodes and it is still the case. Is this normal? The vehicle shown in your README can drive a long distance autonomously. Could you please tell me how I can achieve this?

I changed to Town02 and found that the vehicle operates normally on this map.
By the way, could you tell me the computational power you use when running this code? I have a GeForce RTX 2080 Ti. Every time I run it, the Pygame window crashes after fewer than 40 episodes, yet the program uses less than 30% of the GPU's computing power. I was wondering whether there is network instability in the TCP connection.

Hi, I am also studying this code now. Here is my contact information (WeChat: 17863523116, or other channels); we can communicate and make progress together @longziyu

Ok, I have sent my WeChat to you through your 163 mailbox. @Liuzy0908

Have fun 💯

The pre-trained model is not working with that command; it seems that the car is training!

Hi @longziyu @idreesshaikh @YoussefAser, can someone please tell me how you solved it? I downloaded version 0.9.8 for Windows, put the correct egg file inside the carla folder of the repo, and tried to run python continuous_driver.py --exp-name ppo --train False --town Town02.
When I install carla with pip (pip install carla==0.9.14), I see:
`python continuous_driver.py --exp-name ppo --train False --town Town02
pygame 2.1.2 (SDL 2.0.18, Python 3.7.0)
Hello from the pygame community. https://www.pygame.org/contribute.html
WARNING: Version mismatch detected: You are trying to connect to a simulator that might be incompatible with this API
WARNING: Client API version = 0.9.14
WARNING: Simulator API version = e32e6bff
Failed to make a connection with the server: rpc::rpc_error during call in function get_sensor_token

Client version: 0.9.14
Server version: e32e6bff

There is a Client and Server version mismatch! Please install or download the right versions.
ERROR:root:Connection has been refused by the server.

Exit
Traceback (most recent call last):
  File "continuous_driver.py", line 297, in <module>
    runner()
  File "continuous_driver.py", line 115, in runner
    env = CarlaEnvironment(client, world, town, checkpoint_frequency=None)
UnboundLocalError: local variable 'client' referenced before assignment`

If I don't have it installed, I face this issue:
pygame 2.1.2 (SDL 2.0.18, Python 3.7.0)
Hello from the pygame community. https://www.pygame.org/contribute.html
Encoder could not be initialized.

Exit
Please, I really need some help.

@elpidak If you don't need to change the map, I strongly recommend downloading the pre-compiled version of CARLA (After uninstalling the previous version): https://github.com/carla-simulator/carla/releases

@longziyu I did; I downloaded CARLA from there: https://carla-releases.s3.eu-west-3.amazonaws.com/Windows/CARLA_0.9.8.zip.

@elpidak I noticed that your Python API version is 0.9.14, you are supposed to download CARLA with the corresponding version (0.9.14) instead of 0.9.8.

@longziyu I uninstalled that version and am now trying to run it. I open the exe, and I have added the additional maps to the right folder, but I have this problem:

:\Users\R4A\ThesisCode\Autonomous-Driving-in-Carla-using-Deep-Reinforcement-Learning-main> python continuous_driver.py --exp-name ppo --train False --town Town02
pygame 2.1.2 (SDL 2.0.18, Python 3.7.0)
Hello from the pygame community. https://www.pygame.org/contribute.html
Encoder could not be initialized.

@elpidak Your question sounds familiar; I believe I've encountered it before. However, it's been quite a while since I successfully replicated that code, and I can't quite recall how I managed to do it at the time. I'll make an attempt to rerun the project in the near future and will share the technical details with you then.

Actually @longziyu I found a solution: I added printing of the exception in the encoder file, and the problem was in torch.load. So I searched for all its mentions in the code and added the argument map_location=torch.device('cpu'), so it should look like this: torch.load(self.model_file, map_location=torch.device('cpu'))

I have a CPU-only machine, and apparently it was loading checkpoints saved from CUDA.
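
For anyone hitting the same silent failure, a minimal sketch of the fix described above (the function and argument names are placeholders for the repo's own encoder/model loading code):

```python
import torch
import torch.nn as nn

def load_checkpoint_cpu(model: nn.Module, model_file: str) -> nn.Module:
    """Load a checkpoint that may have been saved on a CUDA device.

    map_location remaps every stored tensor onto the CPU, so a
    CUDA-saved file no longer fails on a CPU-only machine.
    """
    state = torch.load(model_file, map_location=torch.device('cpu'))
    model.load_state_dict(state)  # assumes the file holds a state_dict
    return model
```

If the checkpoint was saved as a whole model rather than a state_dict, torch.load alone with the same map_location argument returns the full model object instead.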

@elpidak Congratulations! If you're interested in a more standardized and maintainable CARLA reinforcement learning platform, consider exploring the Gym platform and the CARLA Leaderboard. Their standardized interfaces allow me to focus more on the algorithmic aspects.

@longziyu I was actually trying to do the implementation with Gym, but with the wrappers everything got a little more complicated; I will look at it again in the next few days. Thank you for your help :)

One more question @longziyu, if you remember: at what point did the rewards come close to the ones in the repo? Mine are increasing at a much lower rate.
[screenshot: reward curves]

@elpidak
A key distinction between reinforcement learning and traditional supervised learning is that in the process of balancing policy exploration and exploitation, the reward function and loss curves often exhibit significant fluctuations. Therefore, the curves you've provided are perfectly normal.

So @longziyu, do I just keep training until I reach a reward of 800?

@elpidak Judging solely by rewards is not comprehensive. After a certain number of timesteps during training, you can evaluate RL algorithms using autonomous driving metrics, such as the success rate of reaching destinations and collision rates. If continued training does not significantly improve these metrics, it indicates that the model has converged.
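
As a hedged sketch of what such an evaluation loop might look like (env and policy are stand-ins for this repo's CarlaEnvironment and trained PPO agent, and the info keys are hypothetical, not the repo's actual API):

```python
# Hedged sketch: evaluate a trained policy with driving metrics rather
# than reward alone. `env` and `policy` are placeholders; adapt the
# `info` keys to however your environment reports terminal causes.
def evaluate(env, policy, episodes: int = 50) -> None:
    successes, collisions = 0, 0
    for _ in range(episodes):
        obs, done, info = env.reset(), False, {}
        while not done:
            action = policy(obs)  # deterministic action at evaluation time
            obs, reward, done, info = env.step(action)
        successes += int(info.get("reached_destination", False))
        collisions += int(info.get("collision", False))
    print(f"success rate:   {successes / episodes:.1%}")
    print(f"collision rate: {collisions / episodes:.1%}")
```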

@longziyu so what happens in that case?

@elpidak Just by looking at the image you sent, I can't assess the effectiveness of your model training. However, based on my experience with training RL algorithms, I suspect it's highly likely that it hasn't converged yet. Achieving satisfactory results with RL algorithms in CARLA often takes a considerable amount of time. For your case, I'd recommend continuing the training until the fluctuations in RL stabilize within a narrow range. Moreover, I'd like to point out that you can't solely rely on rewards or losses to evaluate the quality of an RL algorithm. For instance, just because Algorithm A has a higher average reward than Algorithm B doesn't necessarily mean it's more effective. We need to look for new metrics to evaluate them, tailored to the specific tasks at hand.
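
One practical way to judge whether "fluctuations stabilize within a narrow range" is to look at a moving average of episode rewards rather than the raw curve; a minimal sketch, with an arbitrary window size:

```python
import numpy as np

def smoothed(episode_rewards, window: int = 100):
    """Moving average of episode rewards.

    Convergence shows up as this curve flattening out while the raw
    rewards keep oscillating around it; the window size is arbitrary.
    """
    kernel = np.ones(window) / window
    return np.convolve(episode_rewards, kernel, mode="valid")
```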