rverton / webanalyze

Port of Wappalyzer (uncovers technologies used on websites) to automate mass scanning.

Any way to store the results in JSON or CSV format?

salooali opened this issue · comments

Thanks for this tool!

I'm just having a problem storing the results in JSON or CSV format.

So what problem are you referring to? You need to describe what's not working for you, otherwise I can't help.

I have a huge list of domains and I want to detect what CMS or technologies they are using. For this purpose I want to store the result for each domain in JSON or CSV format, but I didn't find any way to store the results.

As you can see in the README.md (and via the -h flag), there is an option -output:

$ ./webanalyze -h
Usage of ./webanalyze:
  -apps string
        app definition file. (default "apps.json")
  -crawl int
        links to follow from the root page (default 0)
  -host string
        single host to test
  -hosts string
        filename with hosts, one host per line.
  -output string     <----------------------------------
        output format (stdout|csv|json) (default "stdout")
  -search
        searches all urls with same base domain (i.e. example.com and sub.example.com) (default true)
  -silent
        avoid printing header (default false)
  -update
        update apps file
  -worker int
        number of worker (default 4)

You can then redirect the output to a file via your shell, for example:
webanalyze -host google.com -output csv > result.csv

I don't understand what the problem is with redirecting it into a file. It does not matter how large your list is; this will work.

You can load a list of domains to check and put all results in a single file:
webanalyze -hosts fileWithUrls.txt -output csv > fileWithResults.txt
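If you then want to post-process that single results file, e.g. to see which technologies were detected per host, a short script can group the rows. This is only a sketch: the exact column layout of the CSV export (host first, technology details after) is an assumption here, so check the header of your own output file before relying on it.

```python
import csv
from collections import defaultdict

def technologies_per_host(path):
    """Group rows of a webanalyze CSV export by host.

    Assumes the first column is the host and the remaining columns
    describe the detected technology (illustrative only; verify
    against the actual header of your export).
    """
    results = defaultdict(list)
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row, if present
        for row in reader:
            if not row:
                continue
            host, details = row[0], row[1:]
            results[host].append(details)
    return dict(results)
```

For example, `technologies_per_host("fileWithResults.txt")` would return a dict mapping each host to a list of detected-technology rows, which you can then filter or count as needed.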

Please have a look at the help output. I'm locking this issue now.