CUDA version mismatch.
vpenades opened this issue
Ok, after updating my NVIDIA drivers (related to the issue I reported in #40), I ran into yet another problem:
When running an inference, I got this exception:
Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
Now, I looked for this error, and GitHub and TensorFlow forums are full of reports like this, pointing out that it's a problem with the CUDA drivers.
It seems that Emgu.TF deploys cudnn64_7.dll 10.1.105, and my NVIDIA drivers come with NVCuda64.DLL 11.0.208.
Now, most issues I've read state that the solution is to downgrade the drivers to CUDA 10.2... for which you need to install the CUDA Toolkit 10.2... which is a 2 GB download. This might be acceptable for a developer, but not for an end user.
Also, I've tried looking for older versions of the CUDA runtime on the NVIDIA site, and everything points to the toolkits, so it seems the runtimes are tied to the main NVIDIA drivers... so there's no way to install a specific CUDA driver other than with the toolkit.
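For anyone else trying to diagnose this, a quick way to see which CUDA driver and runtime versions are actually present on a machine is to query them with the standard CUDA runtime API. This is just a minimal sketch, independent of Emgu.TF, and assumes the CUDA toolkit headers are available to compile it:

```cpp
// check_cuda_versions.cu - print the CUDA driver and runtime versions.
// Build with: nvcc check_cuda_versions.cu -o check_cuda_versions
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVersion = 0;
    int runtimeVersion = 0;

    // Version supported by the installed NVIDIA driver (the nvcuda / NVCuda64 side).
    cudaDriverGetVersion(&driverVersion);

    // Version of the CUDA runtime this program was built against (the cudart side).
    cudaRuntimeGetVersion(&runtimeVersion);

    // Versions are encoded as 1000 * major + 10 * minor, e.g. 11010 means 11.1.
    std::printf("CUDA driver version : %d.%d\n",
                driverVersion / 1000, (driverVersion % 1000) / 10);
    std::printf("CUDA runtime version: %d.%d\n",
                runtimeVersion / 1000, (runtimeVersion % 1000) / 10);

    return 0;
}
```

The general rule is that the installed driver must support a CUDA version at least as new as the runtime/cuDNN the library was built against, which is why a driver shipping CUDA 11.x can still fail against binaries expecting specific 10.x runtime DLLs that aren't present.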
It also surprises me that the CUDA drivers are not backwards compatible (as DirectX or OpenGL are), which makes CUDA essentially unusable for anything other than professional use cases.
The new Emgu TF 2.4.1 release is built with CUDA 11.1 and CuDNN 8. Please give it a try.
Yes, it's been working for a while now, thanks