RenderKit / oidn

Intel® Open Image Denoise library

Home Page: https://www.openimagedenoise.org/

Question: What kind of data sets was the denoiser trained on?

APichou opened this issue

Hello there!

I am curious about the kinds of data sets the denoiser was initially trained on to achieve the results it produces by default. I have a series of questions for you, if you don't mind:

  • Was it first trained on a scene with cubes and spheres, or something similarly simple?
  • If so, did you then move on to more complex scenes as you went along?
  • Did you train the denoiser in stages, concentrating on particular aspects such as low luminosity, transparency, or reflections? Or was the training more general?
  • Do you have any examples of the data sets you trained it on?

Please tell me more about how you managed to make it work so well; I am really curious!

Thanks in advance!

Hi!

The training scenes are mostly moderately complex (e.g. architectural), not simple synthetic ones. The training images don't focus on particular features but are more general, with random camera views.

The following paper contains some useful details on how to create a good dataset for denoising: https://balint.io/nppd/
This isn't what OIDN uses, but the approach described in the paper should work really well.
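
For anyone curious what such a dataset-generation loop could look like in practice, here is a minimal sketch. To be clear, this is not OIDN's actual training pipeline: the `render` function is a hypothetical stand-in for a path tracer that just fakes Monte Carlo noise shrinking with sample count. The idea is simply that each scene is rendered from several random camera views, once at a low sample count (the noisy network input) and once at a high sample count (the near-converged reference target).

```python
# Minimal sketch of assembling noisy/reference training pairs for a denoiser.
# NOT OIDN's actual pipeline; render() is a hypothetical placeholder renderer.
import numpy as np

def render(scene_id: int, view_seed: int, spp: int) -> np.ndarray:
    """Pretend to path-trace one camera view of a scene at `spp` samples/pixel."""
    rng = np.random.default_rng((scene_id, view_seed))
    # Same (scene, view) seed -> same underlying "ground truth" image.
    clean = rng.random((256, 256, 3), dtype=np.float32)
    # Fake Monte Carlo noise: variance falls off as 1/spp.
    noise_rng = np.random.default_rng((scene_id, view_seed, spp))
    noise = noise_rng.normal(0.0, 1.0 / np.sqrt(spp), clean.shape).astype(np.float32)
    return np.clip(clean + noise, 0.0, None)  # HDR values: clamp only below zero

def make_pairs(num_scenes: int, views_per_scene: int = 4,
               noisy_spp: int = 8, ref_spp: int = 4096):
    """Build one (noisy, reference) pair per random camera view of each scene."""
    pairs = []
    for scene in range(num_scenes):
        for view in range(views_per_scene):
            noisy = render(scene, view, noisy_spp)    # low spp  -> network input
            reference = render(scene, view, ref_spp)  # high spp -> training target
            pairs.append((noisy, reference))
    return pairs

pairs = make_pairs(num_scenes=2)
print(len(pairs), pairs[0][0].shape)  # 8 (256, 256, 3)
```

In a real pipeline the images would come from an actual renderer and be stored as HDR files, typically along with auxiliary albedo and normal buffers (which OIDN also accepts as inputs), but the pairing logic stays the same.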

Hi!
Thank you very much for sharing this info; it's very interesting and helpful!