ctxis / SnitchDNS

Database Driven DNS Server with a Web UI

Home Page: https://www.contextis.com/en/resources/tools/snitchdns



Feature Request: Unauthenticated Mail Notifications

rwjack opened this issue

Since in my case, Snitch runs locally, there is no need for mail authentication for notifications.

Good point actually, I'll remove the requirement for all fields to be filled.

All done, pull the latest and greatest and give it a go. I've tested it locally, so hopefully "it will work on your machine too". If it doesn't, feel free to re-open this issue.

Awesome, I'm able to save the settings, but another strange issue arises.

So first things first:

Even though my mail server's DNS record is set in Snitch's DNS records,
and, on the Snitch server itself, dig mail.domain.tld @127.0.0.1 -p 2024
properly returns what it should:

;; ANSWER SECTION:
mail.domain.tld.		10800	IN	CNAME	dmz.home.lan.
mail.domain.tld.		10800	IN	A	10.10.0.1

But Snitch seems to try to talk to it from the outside, completely ignoring internal DNS records:
[screenshot]
The access denied error is pretty self-explanatory: unauthenticated mail is only allowed from the internal network. Although Snitch says it did query for the record and did not forward it, the mail server was still approached from the outside:
[screenshot]

I think the issue might be that the DNS server hosting Snitch doesn't have Snitch set as its own DNS resolver. But I wasn't able to configure that, since the Snitch resolver is listening on port 2024 and the iptables rules apparently only redirect outside port-53 traffic to the internal port 2024, meaning dig example.org @127.0.0.1 times out.
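For context, the redirect in question is the usual NAT PREROUTING rule, roughly like this (reconstructed from memory, so treat the exact flags as an approximation):

# PREROUTING only sees packets arriving on a network interface; loopback
# traffic never traverses it, so a query to 127.0.0.1:53 is not redirected.
iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 2024
iptables -t nat -A PREROUTING -p tcp --dport 53 -j REDIRECT --to-ports 2024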

This might be related to another issue I had, where my Docker host running Uptime Kuma was not able to resolve any hosts while it had Snitch as its DNS; the second I switched back to Pi-hole it resolved everything. This was extremely strange, because the Docker host could properly resolve DNS records set by Snitch, and so could the Uptime Kuma container itself, but the Kuma web app could not.

Next up, I tried running as root (commented out User and Group in the systemd unit, see the sketch below)
and binding to port 53: ./venv.sh flask settings set --name dns_daemon_bind_port --value 53
But now the SnitchDNS daemon won't start up at all. I'm only getting a weird message in the journal (pasted after the sketch), which shouldn't be a problem?
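The systemd change itself was nothing more than commenting these two lines out (path and values shown roughly; the unit file location is an assumption):

# /etc/systemd/system/snitchdns.service (relevant part only)
[Service]
#User=www-data
#Group=www-data

followed by systemctl daemon-reload and a restart. Here's the journal output: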

Feb 27 13:06:29 ho-dns systemd[1]: snitchdns.service: Ignoring invalid environment assignment 'if [ "${BASH-}" ] && [ "$BASH" !=/bin/sh]; then': /etc/profile
Feb 27 13:06:29 ho-dns systemd[1]: snitchdns.service: Ignoring invalid environment assignment 'if [ "${BASH-}" ] && [ "$BASH" !=/bin/sh]; then': /etc/profile
Feb 27 13:06:29 ho-dns systemd[1]: Starting SnitchDNS Gunicorn...
░░ Subject: A start job for unit snitchdns.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit snitchdns.service has begun execution.
░░
░░ The job identifier is 152660.
Feb 27 13:06:29 ho-dns gunicorn[2437273]: [2022-02-27 13:06:29 +0100] [2437273] [INFO] Starting gunicorn 20.1.0
Feb 27 13:06:29 ho-dns gunicorn[2437273]: [2022-02-27 13:06:29 +0100] [2437273] [INFO] Listening at: http://12.0.0.2:8080 (2437273)
Feb 27 13:06:29 ho-dns gunicorn[2437273]: [2022-02-27 13:06:29 +0100] [2437273] [INFO] Using worker: sync
Feb 27 13:06:29 ho-dns gunicorn[2437275]: [2022-02-27 13:06:29 +0100] [2437275] [INFO] Booting worker with pid: 2437275
Feb 27 13:06:29 ho-dns gunicorn[2437276]: [2022-02-27 13:06:29 +0100] [2437276] [INFO] Booting worker with pid: 2437276
Feb 27 13:06:29 ho-dns gunicorn[2437277]: [2022-02-27 13:06:29 +0100] [2437277] [INFO] Booting worker with pid: 2437277
Feb 27 13:06:31 ho-dns python3[2437274]: SnitchDNS Daemon is not configured - aborting.
Feb 27 13:06:31 ho-dns systemd[1]: Started SnitchDNS Gunicorn.

Switching back to port 2024, with the service still running as root, starts the SnitchDNS daemon fine. So this seems to be a port/permission issue, though nothing else is listening on port 53. Any ideas on how to debug?
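For what it's worth, this is roughly how I checked that nothing else holds the port (both UDP and TCP):

sudo ss -ulpn | grep ':53'
sudo ss -tlpn | grep ':53'

Both come back empty, so the port itself is free.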

Though running it manually on port 53 does work: bash /opt/snitch/venv.sh flask snitch_daemon --bind-ip 0.0.0.0 --bind-port 53

For the first issue (SnitchDNS not using itself for resolutions): it's probably because iptables only redirects traffic that does not originate from localhost. Perhaps this could help: https://stackoverflow.com/a/28170005. As a workaround, you could set the IP directly as the mail host in the settings.
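Something along these lines might do it (untested sketch, adjust protocol and port to your setup):

# the nat OUTPUT chain is what locally generated packets traverse, which PREROUTING never sees
iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 2024
iptables -t nat -A OUTPUT -p tcp --dport 53 -j REDIRECT --to-ports 2024

One caveat: if SnitchDNS is forwarding unmatched queries to an upstream resolver, a blanket OUTPUT redirect like this would also catch its own outgoing queries and loop them back, so you may need to exclude the SnitchDNS user or the upstream IP from the rule.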

For the second one: although SnitchDNS is running as root, when you click the "start" button in the GUI it's actually the www-data user trying to run the server, and since port 53 requires elevated privileges to bind, it gets denied.

I probably need to learn how to read properly.

The error you are getting for the 2nd issue (running as root) is actually the result of me proactively preventing you from running it on privileged ports:

def is_configured(self):
    if self.port < 1024 or self.port > 65535:
        return False

The reason I chose to do this is exactly because someone could try and run it as root, and I was trying to avoid this scenario:

  • SnitchDNS runs as root (for whichever reason).
  • It's also public-facing.
  • There's a vulnerability and it's compromised.
  • Oh no. Now they are root too.

Yeah, setting the mail server IP would work, but that should really only be a temporary workaround.

I'm completely ignoring the GUI here: I changed the systemd unit to run as root and then started the service, so that should run everything as root? Okay, that piece of code makes sense now.

I'll play around and see what can be done. As I said, running manually on port 53 works, but I'm having the same issue with Uptime Kuma: it doesn't resolve CNAME->A records, though it does resolve direct A records. Will experiment more.

Upon further investigation, this seems like a strange issue I do not completely understand.

Here's what doesn't work:
Making a catch-all zone for home.lan that points to a CNAME record, load_balancer.home.lan, and then making a load_balancer.home.lan zone that resolves to an A record, 10.0.0.1. From there, every service would be redirected to the load balancer CNAME, which would then resolve to an A record. E.g.:

  • service1.home.lan would resolve to CNAME: load_balancer.home.lan
  • load_balancer.home.lan would resolve to A: 10.0.0.1
    This works in the browser and the terminal (see the dig sketch below), but does not work in Uptime Kuma.
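To illustrate, this is the intended behaviour as seen from the terminal (hypothetical transcript; the names, TTLs and IPs are just the ones from the example above):

dig service1.home.lan @127.0.0.1 -p 2024

;; ANSWER SECTION:
service1.home.lan.		3600	IN	CNAME	load_balancer.home.lan.
load_balancer.home.lan.	3600	IN	A	10.0.0.1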

What makes Uptime Kuma work:
Making a zone for each service I want, e.g. service1.home.lan, service2.home.lan, ...
Then pointing each service directly to the load balancer's A record.

Looking at it, yes, it works, but it's not really a proper configuration. First off, I'd have to make zones for all the services, 20+ of them. That would be something I could live with, at least if I could set the same CNAME record for all those services, but that doesn't work for some reason; instead I have to add A records pointing at the load balancer. And the worst part: imagine I change my load balancer IP. I would have to change all the A records for all the services again, instead of just updating a single CNAME record.

I hope words were enough to explain this, and I would love to work together to find a solution. Snitch seems like a really cool project.

TBD - Hang on, found another bug

Can you try the "didn't work" setup again with the develop branch please?

You're an absolute legend!

It seems to resolve properly now. It was the exact issue from your previous comment.

Oh, and another thing for your notes: at first I couldn't get Snitch working at all by simply following your documentation.

ImportError: cannot import name 'soft_unicode' from 'markupsafe'

So I just did the following:

diff --git a/requirements.txt b/requirements.txt
index 2f24eee..ced7fb3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,4 +1,4 @@
-Flask<2.0.0
+Flask
Flask-SQLAlchemy
Flask-Migrate
Flask-WTF

Glad we got it working!

As for the installation error: thank you for letting me know. It looks like markupsafe v2.1.0 removed that function, as described here: https://markupsafe.palletsprojects.com/en/2.1.x/changes/#version-2-1-0

I honestly have no clue how you got it to work by removing <2.0.0, because that would install Flask v2 and SnitchDNS definitely doesn't work with it! I've updated requirements.txt to include

markupsafe==2.0.1
which fixes the issue.
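If you're on an existing checkout and don't want to reinstall everything, pinning it manually inside the project's virtualenv should have the same effect (one-liner sketch; how you reach the venv's pip depends on your setup):

pip install 'markupsafe==2.0.1'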

If you have any other issues or ideas for new features, let me know!