hakluke / hakrawler

Simple, fast web crawler designed for easy, quick discovery of endpoints and assets within a web application

Home Page: https://hakluke.com


No output, process exiting normally

zPrototype opened this issue · comments

I've installed hakrawler via the provided go install command on my ubuntu:20.04 machine. I've got a list of subdomains separated by line breaks; when I try to run hakrawler against it, I get no output. The command I ran is:
cat probe_again.txt | hakrawler -t 1 -u -d 3 -h "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:100.0) Gecko/20100101 Firefox/100.0" | tee crawl.txt

I've also tried to add the -subs flag and ran it in docker with:
cat probe_again.txt | docker run --rm -i hakrawler_docker -t 1 -u -d 3 -h "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:100.0) Gecko/20100101 Firefox/100.0"
Again the same issue. Running just hakrawler provides the normal help message with all commands as expected.

what's in probe_again.txt?

Subdomains separated by line breaks. Like:
abc.test.com
this.example.com
my.cooldomain.net
Etc..

You need to provide full URLs, like https://www.google.com not just the domain name.
Try something like this

https://abc.test.com
http://example.com

^ what he said

Can confirm, that was the issue. Maybe add a check to see if domains are missing a protocol and throw an exception? Not too good with Go, otherwise I'd do a PR, but I'm sure it would be helpful!

Wouldn't want to stop the program for one invalid URL in a big list, but perhaps there could be a message on stderr.