ddclient spams unnecessary updates when updating more than one host
guiniol opened this issue
When updating more than one host, ddclient sends updates even if the IP has not changed. This is new in 3.11.1, 3.10.0 does not show this behaviour with the same configuration.
Relevant configuration snippet:
protocol=dyndns2
use=web, web=http://ipv4.nsupdate.info/myip
ssl=yes # yes = use https for updates
server=ipv4.nsupdate.info
login=your.domain.1
password='HIDDEN'
your.domain.1
login=your.domain.2
password='HIDDEN'
your.domain.2
I ran git bisect and found that the first bad commit is e910204, though looking at the diff, I do not see how that would be related.
For now, I've reverted to the 3.10.0 version, but I'm happy to help/test if I can.
Could you share the cache file as well as the verbose ddclient log after an update? (Both after a single host update and a multi-host update)
The cache file is located at /var/cache/ddclient/ddclient.cache
by default.
The verbose log can be generated via ddclient -daemon=0 -debug -verbose -noquiet
(Note: Be very careful redacting all the mentions of your secrets in there!)
For each of the multi-host and single-host configurations, I deleted the cache and ran ddclient twice: once to regenerate the cache and once to use it.
Here are the logs:
I ran into the same kind of problem: ddclient updated the dyn.com info even when there was no need to. In my testing it didn't matter whether I had one host or three at Dyn (two custom hosts and one dynamic host).
However, what fixed it was the following. With
use=if, if=interface
it didn't work, but with
usev4=ifv4, ifv4=interface
it worked. From this I conclude that a bug crept in when the IPv4/IPv6 changes were introduced.
Using ddclient 3.11.1
/Thomas
Switching to usev4=webv4, webv4=http://ipv4.nsupdate.info/myip
seems to have fixed it for me too. I also noticed that the non-v4/v6 options are deprecated, so I'll keep the new config as is. That said, it would be great if the deprecated options were either rejected outright or worked as expected.
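For reference, here is a sketch of how the configuration from the top of this thread might look after migrating to the usev4/webv4 options discussed above (hostnames and the IP-lookup URL are taken from the original snippet; verify the exact option names against your ddclient version's documentation before relying on this):

```
protocol=dyndns2
usev4=webv4, webv4=http://ipv4.nsupdate.info/myip
ssl=yes # yes = use https for updates
server=ipv4.nsupdate.info
login=your.domain.1
password='HIDDEN'
your.domain.1
login=your.domain.2
password='HIDDEN'
your.domain.2
```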
Great that it solved your problem as well!
I started looking into the code, but Perl is not one of my favorite languages... :(
I see two solutions: either the code is fixed or the deprecated options are removed. The worst problem right now is that people who upgrade through their favorite distribution while keeping their old configuration files will break the update mechanism and eventually get blocked at dyn.com. That is what almost happened to me...
/Thomas
tl;dr: From what I can tell, caching fails when the legacy use option is configured, because the cache only has the ipv4 entry and not the legacy ip entry.
See here for the caching logic. With use, the "IP [...] not cached?" check fails because the ip key is missing from the cache. The ip key is missing because the provider functions set these keys directly: legacy providers set ip, while new providers set ipv4 and ipv6. To make the old use option work with new providers, and the new usev* options work with old ones, glue code fills in one from the other. This has already come up with Cloudflare, where the legacy status key was not being set from the status-ipv* keys that the Cloudflare provider implementation sets (see pull request, code). The fix should look just like that: add the relevant glue code to fill in ip from ipv*.
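The glue-code idea above can be sketched as follows. This is a minimal illustration in Python (ddclient itself is Perl), with hypothetical key names mirroring the discussion rather than ddclient's actual internals:

```python
# Sketch of the cache "glue" described above. Assumption: cache entries are
# per-host mappings with 'ip' (legacy) and 'ipv4'/'ipv6' (new-style) keys;
# these names come from the issue discussion, not from ddclient's code.

def fill_legacy_ip(entry):
    """Fill the legacy 'ip' key from 'ipv4'/'ipv6' so code that only
    checks 'ip' still sees a cached address."""
    if 'ip' not in entry:
        if 'ipv4' in entry:
            entry['ip'] = entry['ipv4']
        elif 'ipv6' in entry:
            entry['ip'] = entry['ipv6']
    return entry

def needs_update(entry, current_ip):
    """The 'IP [...] not cached?' check: update only when the cached
    legacy address is missing or differs from the current one."""
    return entry.get('ip') != current_ip

# Before the fix: a new-style provider only wrote 'ipv4', so the legacy
# check always saw a missing 'ip' and forced an update on every run.
entry = {'ipv4': '203.0.113.7'}
print(needs_update(entry, '203.0.113.7'))  # True: spurious update

# With the glue in place, repeated runs with an unchanged IP are skipped.
fill_legacy_ip(entry)
print(needs_update(entry, '203.0.113.7'))  # False: skipped, already set
```

This matches the expected cache contents reported later in the thread, where each host ends up with both an ip and an ipv4 entry holding the same address.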
I don't have more time right now to properly test the fix, but I put it in #595 if anyone wants to check already.
The expected behavior is that when use is configured, the cache ends up with both ipv4 and ip entries, and repeated runs do not send erroneous updates.
Thanks a lot for proposing a fix!
I will give it a try, not today, but later this week. Hope that is OK!?
/Thomas
The fix seems to work for me. I see lines like:
SUCCESS: domain1.nerdpol.ovh: skipped: IP address was already set to ${MY_IP}.
And also:
cache{domain1.nerdpol.ovh}{ip} : ${MY_IP}
cache{domain1.nerdpol.ovh}{ipv4} : ${MY_IP}
Perfect - I'll get a release with the new fix out in the coming days 👍
Just for the record: the fix works for me as well!
Thank you very much for fixing it!
Release v3.11.2 is out and contains the fix for this - please confirm resolution to close this issue 👍
Closing this - no reports of the bug persisting and/or the fix breaking something 👍