How to supply a list of URLs for brute forcing in Nmap? - nmap

Does anyone know how to supply a list of URLs to the http-brute.nse script? It looks like I'm limited to supplying a list of IP addresses, and through the http-brute.path argument I can provide only one path.
Any simple way to just supply a list of known HTTP authentication URLs?
I've looked at the NSE scripts manual and couldn't find anything that makes it that easy.
Something like:
nmap --script http-brute --script-args=urls.txt

Nmap doesn't provide a way to directly supply a list of URLs to the http-brute script. But you can do it with a little bash script such as:
#!/bin/bash
# Read one path per line from urls.txt and run http-brute against each path on the target host
while read -r URL; do
  nmap --script http-brute -p 80 --script-args "http-brute.path=${URL}" host
done < urls.txt
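If urls.txt contains full URLs rather than bare paths, a variation that splits each one into its host and path could look like this (a sketch, assuming plain http:// URLs on port 80; adjust -p for https):
#!/bin/bash
# Sketch: urls.txt is assumed to hold full URLs such as http://example.com/admin/
while read -r url; do
  host_part=$(printf '%s\n' "$url" | sed -e 's|^https\?://||' -e 's|/.*$||')
  path_part="/$(printf '%s\n' "$url" | sed -e 's|^https\?://[^/]*/\?||')"
  nmap --script http-brute -p 80 --script-args "http-brute.path=${path_part}" "$host_part"
done < urls.txt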

Related

wireshark 2.2.5 - how to set ESP preference from command line

I'm trying to find a way to set ESP preferences (encryption keys, authentication keys, etc.) from the command line. I have tried the command below, but Wireshark always says no preference matches mine:
tshark -i - -Y "sip||esp" -d tcp.port=="5000-65535",sip -d
udp.port=="5000-65535",sip -T text -l -O "sip,esp" -o
esp.enable_null_encryption_decode_heuristic:true -o
esp.enable_authentication_check:true -o
esp.enable_encryption_decode:true -o "esp.sa_1:IPv4|*|*|*" -o
"esp.encryption_algorithm_1:AES-CBC [RFC3602]" -o
"esp.encryption_key_1:0xC5DA46E7FF43C8D6C0DD3A2707E42E05" -o
"esp.authentication_algorithm_1:HMAC-MD5-96 [RFC2403]" -o
"esp.authentication_key_1:0xE5A349FCBAD409D15C766702CD400BA4" >
D:\test\dump2.txt
It always says that the "esp.sa_1" preference is unknown, and the same for esp.encryption_algorithm_1, esp.authentication_algorithm_1, and so on.
I have searched around and think that esp.sa_1 is only available in older versions of Wireshark.
Does anyone know how to set these preferences in Wireshark 2.2.5?
Thanks so much!
Unfortunately, the ESP preferences wiki page is out of date. The ESP preferences have been changed to a UAT (User Access Table), so you can more simply create an esp_sa file instead of specifying individual preferences. Probably the easiest way to learn the format of the file is to create one in Wireshark first, but from the source code, you can see that each entry contains the following information:
Protocol used
Source address
Destination address
SPI
Encryption algorithm
Encryption key
Authentication algorithm
Authentication key
For example, an entry might look like:
"IPv4","","","","AES-CBC [RFC3602]","0xC5DA46E7FF43C8D6C0DD3A2707E42E05","HMAC-MD5-96 [RFC2403]","0xE5A349FCBAD409D15C766702CD400BA4"
But if you really want to specify these options on the command-line rather than creating or modifying the esp_sa file, then you can do so. From section 10.2, Start Wireshark from the command line, of the Wireshark User Guide:
User access tables can be overridden using “uat,” followed by the UAT file name and a valid record for the file:
wireshark -o "uat:user_dlts:\"User 0 (DLT=147)\",\"http\",\"0\",\"\",\"0\",\"\""
The example above would dissect packets with a libpcap data link type 147 as HTTP, just as if you had configured it in the DLT_USER protocol preferences.
So, in your case, you would use something like this:
Unix
tshark -o 'uat:esp_sa:"IPv4","","","","AES-CBC [RFC3602]","0xC5DA46E7FF43C8D6C0DD3A2707E42E05","HMAC-MD5-96 [RFC2403]","0xE5A349FCBAD409D15C766702CD400BA4"'
Windows
tshark.exe -o "uat:esp_sa:\"IPv4\",\"\",\"\",\"\",\"AES-CBC [RFC3602]\",\"0xC5DA46E7FF43C8D6C0DD3A2707E42E05\",\"HMAC-MD5-96 [RFC2403]\",\"0xE5A349FCBAD409D15C766702CD400BA4\""

wget appends query string to resulting file

I'm trying to retrieve working webpages with wget and this goes well for most sites with the following command:
wget -p -k http://www.example.com
In these cases I will end up with index.html and the needed CSS/JS etc.
However, in certain situations the URL will have a query string, and in those cases I get an index.html with the query string appended.
Example
www.onlinetechvision.com/?p=566
Combined with the above wget command, this will result in:
index.html?page=566
I have tried using the --restrict-file-names=windows option, but that only gets me to
index.html#page=566
Can anyone explain why this is needed and how I can end up with a regular index.html file?
UPDATE: I'm on the fence about taking a different approach. I found out I can take the first filename that wget saves by parsing its output, so the name that appears after "Saving to:" is the one I need.
However, this name is wrapped in a strange character, â. Rather than just stripping it out with a hardcoded replacement, where does this come from?
If you try the --adjust-extension parameter:
wget -p -k --adjust-extension www.onlinetechvision.com/?p=566
you come closer. In the www.onlinetechvision.com folder there will be a file with a corrected extension: index.html#p=566.html, or index.html?p=566.html on *nix systems. It is now simple to rename that file to index.html, even with a script.
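To rename it, something like this would do (a sketch; the exact saved filename depends on your platform, so adjust the glob as needed):
# Rename the single saved index file back to index.html (assumes one match)
mv www.onlinetechvision.com/index.html[?#]p=566.html www.onlinetechvision.com/index.html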
If you are on a Microsoft OS, make sure you have a recent version of wget; it is also available here: https://eternallybored.org/misc/wget/
To answer your question about why this is needed, remember that the web server is likely to return different results based on the parameters in the query string. If a query for index.html?page=52 returns different results from index.html?page=53, you probably wouldn't want both pages to be saved in the same file.
Each HTTP request that uses a different set of query parameters is quite literally a request for a distinct resource. wget can't predict which of these changes is and isn't going to be significant, so it's doing the conservative thing and preserving the query parameter URLs in the filename of the local document.
My solution is to do the recursive crawling outside wget:
get the directory structure with wget (no files)
loop over each directory to fetch its main entry file (index.html)
This works well with WordPress sites, though it could miss some pages.
#!/bin/bash
#
# get the directory structure (spider mode leaves empty directories, no files)
#
wget --spider -r --no-parent http://<site>/
#
# loop through each directory and fetch its index page plus requisites
#
find . -mindepth 1 -maxdepth 10 -type d | cut -c 3- > ./dir_list.txt
while read -r line; do
  wget --wait=5 --tries=20 --page-requisites --html-extension --convert-links --execute=robots=off --domain=<domain> --strict-comments "http://${line}/"
done < ./dir_list.txt
The query string is required because of the website's design: the site uses the same standard index.html for all content and then uses the query string to pull in the content from another page, for example with a script on the server side (it may be client side if you look at the JavaScript).
Have you tried using --no-cookies? The site could be storing this information in a cookie and pulling it in when you hit the page. This could also be caused by URL rewrite logic, which you will have little control over from the client side.
Use the -O or --output-document option. See http://www.electrictoolbox.com/wget-save-different-filename/
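For the example URL from the question, that might look like this (a sketch; note that -O writes all downloaded documents to the one file you name, so it is best used here without -p, which would otherwise concatenate the page requisites into the same file):
wget -O index.html "http://www.onlinetechvision.com/?p=566"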

How do I get the raw version of a gist from github?

I need to load a shell script from a raw gist, but I can't find a way to get the raw URL.
curl -L address-to-raw-gist.sh | bash
And yet there is one: look for the Raw button (at the top-right of the source code).
The raw URL should look like this:
https://gist.githubusercontent.com/{user}/{gist_hash}/raw/{commit_hash}/{file}
Note: it is possible to get the latest version by omitting the {commit_hash} part, as shown below:
https://gist.githubusercontent.com/{user}/{gist_hash}/raw/{file}
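So the one-liner from the question becomes (placeholders kept exactly as in the templates above):
curl -L "https://gist.githubusercontent.com/{user}/{gist_hash}/raw/{file}" | bash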
February 2014: the raw url just changed.
See "Gist raw file URI change":
The raw host for all Gist files is changing immediately.
This change was made to further isolate user content from trusted GitHub applications.
The new host is
https://gist.githubusercontent.com.
Existing URIs will redirect to the new host.
Before it was https://gist.github.com/<username>/<gist-id>/raw/...
Now it is https://gist.githubusercontent.com/<username>/<gist-id>/raw/...
For instance:
https://gist.githubusercontent.com/VonC/9184693/raw/30d74d258442c7c65512eafab474568dd706c430/testNewGist
KrisWebDev adds in the comments:
If you want the last version of a Gist document, just remove the <commit>/ from URL
https://gist.githubusercontent.com/VonC/9184693/raw/testNewGist
One can simply use the GitHub API.
https://api.github.com/gists/$GIST_ID
Reference: https://miguelpiedrafita.com/github-gists
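For example, to pull the raw content of each file in a gist out of the API response (a sketch, assuming jq is installed and $GIST_ID is set):
# The API returns JSON; .files is an object keyed by filename, each entry with a "content" field
curl -s "https://api.github.com/gists/$GIST_ID" | jq -r '.files[].content'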
GitLab snippets provide short, concise URLs, are easy to create, and go well with the command line.
Example: enable bash completion by patching /etc/bash.bashrc
sudo su -
(curl -s https://gitlab.com/snippets/21846/raw && echo) | patch -s /etc/bash.bashrc

Is procmail chrooted or limited in using linux commands?

I'm using procmail to forward emails to different folders in my Maildir.
I use these two lines to get the FROM and TO from the mail, which works fine.
FROM=`formail -x"From:"`
TO=`formail -x"To:"`
These two commands return the whole line without the From: and To: prefix.
So I get something like:
Firstname Lastname <firstname.lastname@mail-domain.com>
Now I want to extract the email address between < and >.
For this I pipe the variables FROM and TO through grep like this:
FROM_PARSED=`echo $FROM | grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*'`
TO_PARSED=`echo $TO | grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*'`
But when I print FROM_PARSED to the procmail log using LOG=$FROM_PARSED, I get an empty string in both FROM_PARSED and TO_PARSED.
But if I run these commands in my console, everything works fine. I have tried many other approaches with grep, egrep, sed, and even cut (cutting on < and >). They all work in the console, but when I use them in procmail they just return nothing.
Is it possible that procmail is not allowed to use grep and sed commands? Something like a chroot?
I don't get any errors in my procmail log. I just want to extract the valid email address from the From: and To: lines. Extracting with formail works, but parsing with grep or sed fails, even though the expression is correct.
Could somebody help? Maybe I need to set up procmail somehow.
Strange. I added this to the user's .procmailrc file:
SHELL=/bin/bash
The user's shell was set to /bin/false, which is correct because it's a mail user with no SSH access at all.
You should properly quote "$FROM" and "$TO".
You will also need to prefix grep with LC_ALL=POSIX to ensure [:alnum:] will actually match the 26 well-known characters + 10 digits of the English alphabet.
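Putting both suggestions together, the relevant part of the .procmailrc might look like this (a sketch based on the lines from the question, with the character class simplified since dots, underscores, and hyphens need no escaping inside brackets; the LOG line is just for debugging):
SHELL=/bin/bash
FROM=`formail -x"From:"`
# Quote the variable and pin the locale so [:alnum:] matches the ASCII letters and digits
FROM_PARSED=`echo "$FROM" | LC_ALL=POSIX grep -o '[[:alnum:]+._-]*@[[:alnum:]+._-]*'`
LOG="Parsed sender: $FROM_PARSED
"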
You already solved this, but to answer your actual question, it is possible to run procmail in a chroot, but this is certainly not done by Procmail itself. Sendmail used to come with something called the Sendmail Restricted Shell (originally called rsh but renamed to remsh) which allowed system administrators to chroot the delivery process. But to summarize, this is a feature of the MTA, not of Procmail.

Getting the path of a remote rsync depot

I know that if you run rsync like this:
rsync some.domain.com::
it will return a list of the rsync depots (modules). Is there any way of getting it to return the details of a depot, specifically its path?
Thanks
No, rsyncd is specifically designed not to reveal the physical path of the modules. Now, if you have shell access to the rsyncd server, you can read /etc/rsyncd.conf for that information.
(But, there may be ways to exploit rsyncd to reveal the path, if the use chroot setting is off. Don't quote me on that, though.)
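If you do have that shell access, a quick way to list module names and their paths is something like this (a sketch; assumes the config is at the default /etc/rsyncd.conf):
# Print each [module] header and its path = line
grep -E '^\[|^[[:space:]]*path[[:space:]]*=' /etc/rsyncd.conf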