AndreWeiner / ml-cfd-lecture

Lecture material for machine learning applied to computational fluid mechanics

An issue about steady and transient

jiangzhangze opened this issue · comments

Hi, @AndreWeiner
In this project's drl_control_cylinder case, have you tried training the agent on the results of a steady numerical simulation and then testing it on a transient numerical simulation?
I have a naive conjecture: an agent trained on the velocity fields of a steady simulation might perform just as well as an agent trained on the velocity fields of a transient simulation. And it is known that a steady simulation is faster than a transient one.

This is just my conjecture; I would very much appreciate any advice or criticism.

Hi,
in my opinion this should work. I attached a plot comparing the performance of the controlled cylinder flow when the agent starts at t = 4 s (beginning of the quasi-steady state) with a simulation where the agent starts directly at t = 0 s. Both simulations use the same policy, trained for 80 episodes with a buffer size of 10 and a trajectory length of 4 s. As you can see, the agent performs quite well in the transient phase, although it was trained only on the quasi-steady flow fields.

Regards, Janis
[Attached figure: comparison_cl_cd — lift and drag coefficients with control starting at t = 0 s vs. t = 4 s]
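For reference, a comparison like the one in the attached figure can be reproduced from the force coefficient output of both runs. The sketch below is only illustrative: the file paths, directory names, and the column layout of coefficient.dat are assumptions that depend on your OpenFOAM version and forceCoeffs setup, so adjust them to your case.

```python
# Minimal sketch (not the exact script behind the attached figure): compare
# lift/drag coefficients of two runs, one with control from t = 0 s and one
# with control from t = 4 s. Paths and column indices are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def load_coeffs(path):
    """Load time, drag, and lift coefficients from a forceCoeffs output file."""
    data = np.loadtxt(path, comments="#")
    # assumed columns: time, cd, ..., cl (check the file header of your version)
    return data[:, 0], data[:, 1], data[:, 3]

t0, cd0, cl0 = load_coeffs("run_start_t0/postProcessing/forces/0/coefficient.dat")
t4, cd4, cl4 = load_coeffs("run_start_t4/postProcessing/forces/4/coefficient.dat")

fig, (ax_cd, ax_cl) = plt.subplots(2, 1, sharex=True)
ax_cd.plot(t0, cd0, label="control from t = 0 s")
ax_cd.plot(t4, cd4, label="control from t = 4 s")
ax_cl.plot(t0, cl0)
ax_cl.plot(t4, cl4)
ax_cd.set_ylabel(r"$c_d$")
ax_cl.set_ylabel(r"$c_l$")
ax_cl.set_xlabel("t in s")
ax_cd.legend()
plt.savefig("comparison_cl_cd.png")
```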

Dear @jiangzhangze,
the answer depends very much on your definition of steady. If you mean quasi-steady, refer to Janis' answer above. Note that the quasi-steady solution is computed with a transient solver, e.g., pimpleFoam. If you are referring to a truly steady solution, e.g., one computed with simpleFoam, it is not possible to conduct the training with such a simulation. The reasons are as follows: I) simpleFoam will not converge, since there is no steady solution if Re is above roughly 60; II) even if you used the oscillatory, non-converged output of simpleFoam, you would still need a physically meaningful time step to train and apply the control law, and there would be almost no computational benefit in that case.
Hope this answers your question. Otherwise, please clarify.
Best, Andre
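As a quick sanity check of the regime argument above, you can estimate the Reynolds number of your setup; above roughly Re = 60 the cylinder wake sheds vortices and a steady solver has no converged solution to find. The numbers below are illustrative and not necessarily the exact lecture setup.

```python
# Quick check with assumed values (adjust to your case setup):
u_mean = 1.0    # mean inflow velocity in m/s (assumed)
d = 0.1         # cylinder diameter in m (assumed)
nu = 1.0e-3     # kinematic viscosity in m^2/s (assumed)

re = u_mean * d / nu
print(f"Re = {re:.0f} -> {'unsteady (vortex shedding)' if re > 60 else 'possibly steady'} regime")
```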

Hi @AndreWeiner @JanisGeise
Thanks for your reply. Another question: how can I start training the agent at a specific time (such as t = 4 s)?

Hi @jiangzhangze,
please have a look at drlfoam and in particular at the run_training.py file. In the new framework, experimenting with various DRL settings is much easier. I will also port the lecture material to drlfoam in a couple of weeks.
Best, Andre
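As a rough illustration of one possible approach (not drlfoam's actual API; the case path and helper below are assumptions): you can run the uncontrolled flow once up to t = 4 s and then let the controlled run restart from that state by setting startFrom/startTime in the base case's controlDict, provided the 4 s time directory with the restart fields is present in the case.

```python
# Sketch under assumptions: point an OpenFOAM case's controlDict to a restart
# time so the simulation (and hence the control) starts at t = 4 s.
import re
from pathlib import Path

def set_start_time(case_dir, start_time):
    """Set startFrom/startTime in system/controlDict of an OpenFOAM case."""
    control_dict = Path(case_dir) / "system" / "controlDict"
    content = control_dict.read_text()
    content = re.sub(r"startFrom\s+\S+;", "startFrom       startTime;", content)
    content = re.sub(r"startTime\s+\S+;", f"startTime       {start_time};", content)
    control_dict.write_text(content)

# hypothetical case path; the 4/ time directory must exist from a precursor run
set_start_time("test_cases/cylinder2D_base", 4)
```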

Thank you very much.

I am closing this issue for now. Please re-open if needed.
Cheers