TechnitiumSoftware / DnsServer

Technitium DNS Server

Home Page: https://technitium.com/dns/


HTTP to HTTPS redirection can brick the web interface

skyboy opened this issue · comments

If HTTP to HTTPS redirection is enabled and you disable the HTTPS service, redirection remains enabled and the web console becomes inaccessible unless the config file is manually edited to turn it off.

Thanks for the feedback. However, the HTTPS redirection is only enabled if HTTPS is enabled and a cert is configured. You can refer to the code here, which shows all the conditions checked before enabling the redirection.

If you are seeing this redirection, I am not exactly sure what the reason could be. I would suggest trying again with another web browser, or using private/incognito mode. It would also help to open the web browser's developer tools and check the Network tab, which will show whether any redirection is happening.

One reason that can have this effect: if you are using a subdomain of a public domain name that you own, and you have configured HSTS on that domain's web server, the browser caches the HSTS policy and will always attempt to connect using HTTPS.
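
A quick way to rule that scenario in or out is to check whether the domain's web server is actually sending an HSTS policy. A minimal sketch in Python (example.com is a placeholder for your domain; this is just a diagnostic, not part of Technitium):

```python
# If this prints a policy such as "max-age=31536000; includeSubDomains",
# any browser that has seen it will keep forcing HTTPS for the domain
# (and possibly its subdomains) until the policy expires.
import urllib.request

with urllib.request.urlopen("https://example.com/") as resp:  # placeholder URL
    print(resp.headers.get("Strict-Transport-Security"))
```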

  1. I have it configured to a pseudo-TLD, so the HSTS scenario is a non-issue.

  2. I had checked extensively to try to regain access; nothing worked, and everything showed me being redirected.

  3. And here I think I see what the problem was: I had disabled HTTPS while the application was running, as I was attempting to make it reload the certificate. I had swapped the physical file and removed the password; simply changing the password didn't appear to work, so I just unchecked HTTPS and hit save. I then stopped being able to connect because it kept redirecting me to a port it wasn't listening on.


The web server will automatically reload the TLS cert if you just replace the cert file with a new one. It periodically checks the file's modified date and reloads the cert when the date changes, so you do not have to do anything to trigger a reload.
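
For illustration, a minimal sketch of that mtime-polling technique in Python (the server itself is C#; the path and interval below are made up, so treat this as a sketch of the idea rather than the actual implementation):

```python
import os
import time

CERT_PATH = "cert.pfx"  # hypothetical path to the TLS cert file
POLL_INTERVAL = 60      # seconds between checks

def load_certificate(path):
    """Placeholder for the real certificate (re)load logic."""
    print(f"(re)loading certificate from {path}")

last_mtime = None
while True:
    mtime = os.path.getmtime(CERT_PATH)
    if last_mtime is None or mtime != last_mtime:
        load_certificate(CERT_PATH)  # file was replaced: reload it
        last_mtime = mtime
    time.sleep(POLL_INTERVAL)
```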

The web server is reloaded for any change in settings and starts again with the latest settings. The app is designed to not require restarting for any changes, so it reloads updated settings the same way it does when it first starts. All these events show up in the DNS logs, so I would suggest checking the logs to see if there were any errors; the logs will also tell you whether the web server started successfully on a specific port.

Since I am unable to reproduce this issue, if you are still able to reproduce it, let me know the exact steps so I can attempt to trigger the same behavior. This will help fix any bug that is causing it.

  1. Good to know, but since the password had changed from <somepass> to <empty string> I needed to modify the settings anyway. In the attached log[2] you can see the DNS server (which shares the cert) hit that error, but I was having trouble getting the web server to look like it was accepting a password change to an empty input field; based on the logs, that change did take, but it wasn't clear in the interface at the time. However, the logs[2] also show that the server did not restart on this change, which is why I kept seeing the old cert show up. This element in particular is a bug that should probably be addressed; if desired, I can open a separate bug report for it.

  2. I had noticed a lack of hard restarts on settings changes, but my trust in consistency between methods of changing settings is low due to general experience (apologies, but also I think you know). The log[2] does show when I turned HTTPS off at [2024-06-01 07:20:56 UTC]; after that there is just spam from the DNS cert being bad, until I 'reverse-engineered' the save format (with the help of the source), turned the HTTPS redirect off, and rebooted the server at [2024-06-01 07:53:22 UTC].

As for reproducing the issue itself, see footnote [1] below.

Due to the general amount of stuff going on (and going wrong) in the log[2], I feel I should explain:

  • It opens with DNSSEC spam because I use DNSSEC as an internal flag for record keeping; it complains about both pseudo-TLDs that are 'self-signed' by the server (I have a CA set up and trusted, so the terms are not entirely correct) and lower domains that don't resolve, mostly the self-hosted TLDs. I feel like this may be a bug? Or perhaps the software just isn't designed to be used as a TLD server, and this is my fault.
  • I reconfigured my network in general to put the DNS server on 10.10.10.10 for aesthetic reasons. This was an entire journey as I figured out the cryptic nonsense my router does (some of the advanced settings are only accessible through an advanced-settings button in the basic settings instead of in the advanced settings), and it took a bit of time and left some network and binding errors lingering while I sorted out the subnet mask, during which the device the server is on had two IPs on one physical port.
  • This then had me updating the cert to include the new IP, and in doing so I removed its password, because I realized the impossible-to-bypass prompts from openssl for packaging certs into PFX (why does -nodes work for reading but not writing?) are happy to accept nothing, and no password is easier to remember for something that is internally facing and ideally set-and-forget for a few years at a time (see the sketch after this list).
  • This led to the adventure that is the subject of this bug report, during [2024-06-01 07:12:48 UTC] :: [2024-06-01 07:53:22 UTC].
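
As referenced in the list above, here is one way to produce a password-less PFX without fighting openssl's prompts at all: a sketch using Python's cryptography package (file names are placeholders; it assumes a PEM key and cert on disk):

```python
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    NoEncryption,
    load_pem_private_key,
    pkcs12,
)

# Placeholder file names; any PEM-encoded key/cert pair will do.
key = load_pem_private_key(open("server.key", "rb").read(), password=None)
cert = x509.load_pem_x509_certificate(open("server.crt", "rb").read())

# NoEncryption() yields a PFX with no password at all.
pfx_bytes = pkcs12.serialize_key_and_certificates(
    name=b"dns-admin",
    key=key,
    cert=cert,
    cas=None,
    encryption_algorithm=NoEncryption(),
)

with open("server.pfx", "wb") as f:
    f.write(pfx_bytes)
```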

I have additionally sanitized some domain lookups and the zones I host with <snip>, both for privacy and because I seize local control of a large number of tracking and advertising domains to redirect them to an internal server that lets me inject userscripts across my devices without the painfully slow userscript add-ons for browsers; I do not see a reason to direct attention to those websites.

[1]: I will be setting up a VM so I can attempt to reliably reproduce the issue later today, as I do not wish to do it to my server again; if you have any builds with additional debug logging (such as for HTTPS redirection state?), I would be happy to use those instead of the release builds.
[2]: dns.log


After poking the relevant settings in multiple combinations, I couldn't reproduce the issue, though this was on a different machine and architecture without fully reproducing the settings; I guess tomorrow I'll bite the bullet, copy my settings, and try reproducing it on the actual server to see if it's something specific to that install.
Ignore the untrusted root errors; I used an old copy of Ubuntu Server that was convenient, and it doesn't know what Let's Encrypt is / has a problem with X2.
testdns.log

After further testing on the actual server today, I did discover that the cache settings of the web interface can reproduce the issue, which makes me suspect I may have been experiencing a hyper-specific bug in Firefox:
At the time I had two separate Firefox profiles running, tested connecting in both of them, and had intermittently used both in all four modes throughout the weeks leading up. While 300 seconds shouldn't have produced the issue, these particular instances had been running continuously for more than 150 days. I had additionally been experiencing issues getting fresh copies of my own scripts; my assumption had been a configuration error in the chain of logic leading to their delivery (mod_rewrite/headers involvement), but based on this it may just have been an issue with the process having run for so long that it stopped being able to correctly validate timestamps, thanks to some overflow somewhere.

It will take a considerable amount of time before I can test that idea, as I just recently forkbombed myself through a typo and everything is fresh. So unless there's a race condition where different threads can induce a configuration error in the middle of restarting, this can be closed after I get feedback on whether I should open separate issues for these two items:

  • Web server doesn't restart when cert password is changed but cert file path remains the same
  • DNSSec doesn't support signing pseudo-tlds (or I have misconfigured/misused something)

Thanks for the details.

Good to know, but since the password had changed from <somepass> to <empty string> I needed to modify the settings anyway. In the attached log[2] you can see the DNS server (which shares the cert) hit that error, but I was having trouble getting the web server to look like it was accepting a password change to an empty input field; based on the logs, that change did take, but it wasn't clear in the interface at the time. However, the logs[2] also show that the server did not restart on this change, which is why I kept seeing the old cert show up. This element in particular is a bug that should probably be addressed; if desired, I can open a separate bug report for it.

The DNS server will reload settings for either a change in the TLS cert path or a change of the PFX file password. You can check the conditions here. This has been tested and works as expected.

I had noticed a lack of hard restarts on settings changes, but my trust in consistency between methods of changing settings is low due to general experience (apologies, but also I think you know). The log[2] does show when I turned HTTPS off at [2024-06-01 07:20:56 UTC]; after that there is just spam from the DNS cert being bad, until I 'reverse-engineered' the save format (with the help of the source), turned the HTTPS redirect off, and rebooted the server at [2024-06-01 07:53:22 UTC].

The DNS cert error in the logs indicates that you have configured Optional Protocols and that the TLS cert has an incorrect password specified. This, however, is not related to the HTTP web service, since its TLS cert option is separate.

It opens with DNSSEC spam because I use DNSSEC as an internal flag for record keeping; it complains about both pseudo-TLDs that are 'self-signed' by the server (I have a CA set up and trusted, so the terms are not entirely correct) and lower domains that don't resolve, mostly the self-hosted TLDs. I feel like this may be a bug? Or perhaps the software just isn't designed to be used as a TLD server, and this is my fault.

The logs regarding DNSSEC are due to the primary zones for private TLDs that are DNSSEC signed. Since these domain names do not exist in public DNS, the DNS server is failing to resolve the DS records for them to update the Key Signing Key's status. This is not a bug or issue; it's just that private domain names are not supposed to be signed with DNSSEC, since they will fail validation. I would recommend that you unsign those private zones to stop these errors from being logged.
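
The failure mode is easy to see with a manual DS lookup. A sketch using dnspython ("lan." below is a hypothetical stand-in for any private pseudo-TLD):

```python
import dns.resolver

try:
    # Ask public DNS for the DS record that a signed zone's parent would hold.
    answer = dns.resolver.resolve("lan.", "DS")
    print(answer.rrset)
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    # Expected for a private TLD: the real root zone has no such delegation,
    # so the KSK status check can never find a DS record.
    print("no DS record: the TLD does not exist in public DNS")
```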

Web server doesn't restart when cert password is changed but cert file path remains the same

As mentioned earlier, a password change for the cert also triggers a reload of the cert. If you can still manage to provide steps to reproduce this issue, I can test it again to confirm.

DNSSec doesn't support signing pseudo-tlds (or I have misconfigured/misused something)

DNSSEC signing for private domain names has no meaning and should not be done, since these domain names would fail to validate anyway.

The DNS cert error in the logs tells that you have configured Optional Protocols and that the TLS cert has incorrect password specified. This is however not related to the HTTP web service as the TLS cert option for it is separate.

The DNS cert error was because I use the same cert for both DNS and the admin panel and had not updated it prior to the web interface; it's largely irrelevant, but it was a timing-locator event and has no value beyond visibly standing out in the logs when searching.

The DNS server will reload settings for either a change in the TLS cert path or a change of the PFX file password. You can check the conditions here. This has been tested and works as expected.


As mentioned earlier, a password change for the cert also triggers a reload of the cert. If you can still manage to provide steps to reproduce this issue, I can test it again to confirm.

This does change the path/password for when it gets around to reloading it, but it does not immediately trigger a reload of the cert/server as far as I can tell. However, there are layers here and I am unfamiliar with the code base; seeing the same cert may be a caching issue.

The logs regarding DNSSEC are due to the primary zones for private TLDs that are DNSSEC signed. Since these domain names do not exist in public DNS, the DNS server is failing to resolve the DS records for them to update the Key Signing Key's status. This is not a bug or issue; it's just that private domain names are not supposed to be signed with DNSSEC, since they will fail validation. I would recommend that you unsign those private zones to stop these errors from being logged.


DNSSEC signing for private domain names has no meaning and should not be done, since these domain names would fail to validate anyway.

Fair enough, but is there a reason for not supporting it? I have my own personal trusted root CA, so it should validate unless there are implementation details of DNSSEC that prevent it from being used on different internet roots -- though you may not wish to support creating private forks of the internet regardless, and that would be your prerogative.

This may fall under a feature request, if you don't have a problem with it and the complications of becoming a multi-root resolver, as there are quite a few alternative roots: https://en.wikipedia.org/wiki/Alternative_DNS_root#Implementations


Within the scope of the issue itself: you may wish to change the cache settings served over HTTP while the redirect is on, so that a cached response does not continue to redirect you. Or use cache settings that require re-validating the ETag, so the browser catches this situation.
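
To make the suggestion concrete, here is a sketch of a redirector that sends Cache-Control: no-store with the 307, so a client can never keep replaying the redirect after HTTPS is disabled. This assumes nothing about Technitium's actual code; ports and host handling are simplified:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HttpsRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 307 preserves the request method; no-store forbids caching the redirect.
        self.send_response(307)
        # Real code would swap in the HTTPS port here; omitted for brevity.
        self.send_header("Location", f"https://{self.headers['Host']}{self.path}")
        self.send_header("Cache-Control", "no-store")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), HttpsRedirectHandler).serve_forever()
```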

This does change the path/password for when it gets around to reloading it, but it does not immediately trigger a reload of the cert/server as far as I can tell. However, there are layers here and I am unfamiliar with the code base; seeing the same cert may be a caching issue.

The certs are loaded immediately if the path or the password for the cert changes. You can check the code here.
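
In rough pseudo-Python, the condition being described is something like this (illustrative only; the actual check lives in the linked C# source, and the setting names here are made up):

```python
def cert_needs_reload(old_settings: dict, new_settings: dict) -> bool:
    # Reload whenever either the cert file path or the PFX password changed.
    return (
        old_settings.get("certPath") != new_settings.get("certPath")
        or old_settings.get("certPassword") != new_settings.get("certPassword")
    )
```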

Fair enough, but is there a reason for not supporting it? I have my own personal trusted root CA, so it should validate unless there are implementation details of DNSSEC that prevent it from being used on different internet roots -- though you may not wish to support creating private forks of the internet regardless, and that would be your prerogative.

DNSSEC and PKI are totally different systems. With a trusted root CA installed on your clients, you can host HTTPS services on your local network. But with DNSSEC, there is no DANE support available in any web browser or HTTP client, so nothing can benefit from a privately signed primary zone.

There is no issue with support for signing the private zone; you already have it signed. The error you see is just a feature that checks for a DS record in the parent zone (the root zone in this case) so that it can show whether your zone's Key Signing Key (KSK) is "Active", that is, whether the KSK has the relevant DS record in the parent zone and is working as expected. I will update the feature to stop this check once it finds that the zone is private, which will prevent these frequent error logs from being generated.

This may fall under a feature request, if you don't have a problem with it and the complications of becoming a multi-root resolver, as there are quite a few alternative roots: https://en.wikipedia.org/wiki/Alternative_DNS_root#Implementations

There is already support for alternative root servers. You just need to configure a secondary zone or a stub zone for the alternative root. I am not sure if the alternative root servers support DNSSEC; if they do, you can edit the root-anchors.xml file in the installation folder and add another entry for the alternative root's DS records.

But this is also of not much use, since signing the zones is only one half; the clients must be able to validate those zones too. If your local clients do not have DNSSEC validation enabled, there is no point in signing these private/alternative zones in the first place.
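
For anyone who wants to experiment, an alternative root can be probed directly with dnspython before configuring a zone for it. The IP below is a documentation placeholder (TEST-NET-2), not a real alternative-root address:

```python
import dns.message
import dns.query

ALT_ROOT_IP = "198.51.100.1"  # placeholder; substitute a real alternative root

# Ask the server for the root NS set, as a stub/secondary zone would.
ns_query = dns.message.make_query(".", "NS")
print(dns.query.udp(ns_query, ALT_ROOT_IP, timeout=3).answer)

# A DNSKEY at the root hints at whether the alternative root is DNSSEC signed.
key_query = dns.message.make_query(".", "DNSKEY", want_dnssec=True)
print(dns.query.udp(key_query, ALT_ROOT_IP, timeout=3).answer)
```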

Within the scope of the issue itself: you may wish to change the cache settings served over HTTP while the redirect is on, so that a cached response does not continue to redirect you. Or use cache settings that require re-validating the ETag, so the browser catches this situation.

The 302 redirections are not supposed to be cached; only 301 redirections are cached, since those are permanent. So I cannot do much about it, given that this issue is not reproducible. I have tested this with three popular web browsers, and all of them are able to switch from HTTPS to HTTP without issues when the settings are updated to disable HTTPS.


I went looking to find out why I am seeing caching happen here, and while the latest spec doesn't mention caching at all within the definition - which I personally disapprove of, as this creates a local ambiguity when following hotlinks - it does have a more distant definition explicitly[1] forbidding it.
Unfortunately, any browser conforming to the older spec will be caching these responses, because the former spec explicitly allowed caching of 302 responses when they contain cache-control headers: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.3
Annoyingly, it also allows caching of 307 responses, so apparently the most compatible way to implement 302/307 is to not send cache-control headers with them.

The updated spec is just barely 2 years old now, which explains what's happening.

[1]: Edit: I may be incorrect, actually; it says heuristically, and there is no discussion in this document of what should happen when the response carries explicit cache information: "all other status codes are not heuristically cacheable."

Edit: I was entirely incorrect; near the end of the document we have the full definitions: https://www.rfc-editor.org/rfc/rfc9110.html#name-considerations-for-new-stat

The definition of a new final status code ought to specify whether or not it is heuristically cacheable. Note that any response with a final status code can be cached if the response has explicit freshness information. A status code defined as heuristically cacheable is allowed to be cached without explicit freshness information.

So sending cache-control with a 302 makes it possible for a browser to cache it; whether or not it does will be implementation-specific, however, since the definition there says "can" instead of "must".


Thanks for the detailed analysis. The web server, however, does not set any cache-control header for the redirect, and it uses a 307 redirect. I checked this with Firefox, and you can see the headers below:

[image: Firefox network tab showing the 307 redirect response headers, with no Cache-Control header set]

I tested again whether the browser was caching the 307 redirect, and it did not. In all tests, the redirection from HTTP to HTTPS stopped when the HTTPS option was disabled.
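
The same check can be scripted without a browser. A sketch using Python's http.client, which never follows redirects (the host and port are placeholders for the web console's HTTP endpoint):

```python
import http.client

# Placeholder address for the DNS server's web console over plain HTTP.
conn = http.client.HTTPConnection("192.168.1.1", 5380, timeout=5)
conn.request("GET", "/")
resp = conn.getresponse()

print(resp.status, resp.reason)                            # expect 307 when redirecting
print("Location:", resp.getheader("Location"))
print("Cache-Control:", resp.getheader("Cache-Control"))   # expect None
```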

There is a response-timing issue on a LAN where it sends me back to, and serves, the HTTPS page before shutting down, but that's client code and doesn't affect remote management thanks to network latency. However, I'm not seeing those either, and I was definitely observing caching behavior, as I consistently wound up at https: from http:.

Was the entire issue caused by me having a Firefox process that had lived for over 5 months? I imagine they don't do any long-bake tests that are so extreme...
Well, that's going to take a long time to find out.


I will open at least one other issue for a minor thing, and more if I come across anything beyond that one actual UX issue as I build a small app on top of Technitium:
I plan to make it easy to DNS-hole myself into a 'safe' corner of the internet, smacking down all of those ad-serving domains that are just strings of random ASCII characters that defeat pattern matching, by operating exclusively off a DNS whitelist that I can manage with buttons to add/remove sites that attempt to load, while blacklisting *. It's... going to be an experience, I'm sure.

Apologies for the drawn-out report; I'd wanted to make sure the issue wouldn't impact anyone else, and, well, it won't.


I appreciate these reports, which help fix issues.