tshark doesn't always print source ip

How can I get the TCP payload of packets with tshark, and also get the source IP that sent those packets?
This command works for most packets, but some packets are still printed WITHOUT a source IP (why?):
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e ip.src
*To test my command, run it and then browse to http://stackoverflow.com. Notice that usually the data chunks ("47:45:54:20:2f:61:64:73:...") have an IP after them, but not always.

I found the problem:
The packets with the missing source IP were IPv6, but my original command only extracts the IPv4 source address (ip.src).
This works:
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e ip.src -e ipv6.src

Related

db_nmap metasploit using hosts in postgres database

I am using Metasploit and attempting to run db_nmap against all the hosts I imported from an nmap run that I saved into a .xml file. All of the hosts are in my Metasploit Postgres database, as verified by running the hosts command. However, I am unsure how to run db_nmap against all of these hosts.
The typical command I use for a single IP is:
db_nmap -sS -Pn -A --script vuln 192.0.0.1
The command I tried to use for all IPs in my database:
db_nmap -sS -Pn -A --script vuln hosts
I also tried
db_nmap -sS -Pn -A --script vuln hosts -c
I am also currently running this as a workaround, but so far it hasn't produced any output: db_nmap -sS -Pn -A --script vuln -i /home/myuser/targets.txt
I cannot find the documentation I need so I am hoping someone can help me out here.
Thank you!
Try this:
hosts -o hostcsv
cat hostcsv | awk -F"," '{print $1}' | tr -d '"' | sort -u > host.txt
db_nmap -iL host.txt
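In case the mix of environments is not obvious: the hosts export and the db_nmap scan run inside msfconsole, while the awk pipeline runs in a regular shell in the same directory. Roughly:
Inside msfconsole:
hosts -o hostcsv
In a regular shell:
awk -F"," '{print $1}' hostcsv | tr -d '"' | sort -u > host.txt
Back inside msfconsole:
db_nmap -sS -Pn -A --script vuln -iL host.txt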

Shell Script how to pass an argument with spaces inside a variable

I'm making a script to synchronize directories with rsync over ssh. I run into trouble when I want to define a custom port. Suppose a normal working script has the syntax:
#! /bin/sh
rval=2222
port="ssh -p $rval"
rsync --progress -av -e "$port" sflash@192.168.10.107:/home/sflash/Documents/tmp/tcopy/ /home/sflash/Documents/tmp/tcopy
The syntax for specifying a custom port is -e "ssh -p 2222". However, if I want to use a variable in this case, like:
#! /bin/sh
rval=2222
port="-e \"ssh -p $rval\""
rsync --progress -av $port sflash@192.168.10.107:/home/sflash/Documents/tmp/tcopy/ /home/sflash/Documents/tmp/tcopy
This will not work, likely due to some interaction with word splitting (IFS). I can avoid this scenario entirely by adding an if statement that checks whether port is defined, but I am curious about the exact reason why this fails, and whether there is a way to make this approach work.
EDIT: Sorry, I am restricted to plain POSIX shell.
You haven't actually provided enough detail to be certain, but I suspect you are hitting a common misconception.
When you do:
rval=2222
rsync --progress -av -e "ssh -p $rval" src dst
rsync is invoked with 6 arguments: --progress, -av, -e, ssh -p 2222, src, and dst.
On the other hand, when you do:
port="-e \"ssh -p $rval\""
rsync --progress -av $port src dst
rsync is invoked with 8 arguments: --progress, -av, -e, "ssh, -p, 2222", src, and dst.
You do not want the double quotes to be passed to rsync, and you do not want the ssh -p 2222 to be split up into 3 arguments. One (terrible) way to do what you want is to use eval. But it seems what you really want is:
rval=2222
port="ssh -p $rval"
rsync --progress -av ${port:+-e "$port"} src dst
Now, if port is defined and not the empty string, rsync will be invoked with the additional arguments -e and ssh -p 2222 (as desired), and if port is undefined or empty, neither the -e nor the $port argument will be used.
Note that this is a case where you must not use double quotes around ${port:+-e "$port"}. If you do so, then an empty string would be passed as an argument when $port is the empty string. When $port is not the empty string, it would pass a single argument -e ssh -p 2222 rather than splitting into 2 arguments -e and ssh -p 2222.
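If you want to see exactly how the shell splits these expansions, a quick diagnostic is to substitute printf for rsync (a sketch; the port value is made up):
port="-e \"ssh -p 2222\""
printf '[%s]\n' $port
This prints [-e], ["ssh], [-p] and [2222"] on separate lines: four arguments, with the backslash-escaped quotes kept as literal characters. With the ${port:+...} form instead:
port="ssh -p 2222"
printf '[%s]\n' ${port:+-e "$port"}
you get [-e] and [ssh -p 2222]: exactly the two arguments rsync expects.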

tshark packet capture filter by request url

I am trying to capture only packets that contain requests to a certain API endpoint, so I tried to filter using the following:
tshark -i 2 -f 'port 80' -T pdml http.request.uri contains "/google/"
However I keep getting the following error:
tshark: A capture filter was specified both with "-f" and with additional
command-line arguments.
I tried removing the -f, but that did not help either. Any suggestions?
e.g. URL: https://testAPI.com/termsearch/google/application
Your tshark command is incorrect. To specify a Wireshark display filter, you need to use the -Y option.
Windows:
tshark -i 2 -T pdml -Y "http.request.uri contains \"/google/\""
*nix:
tshark -i 2 -T pdml -Y 'http.request.uri contains "/google/"'
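If you still want a capture filter to limit what gets recorded in the first place, -f and -Y can be combined without triggering the original error, as long as the display filter is given via -Y rather than as a trailing argument, e.g.:
tshark -i 2 -f 'port 80' -T pdml -Y 'http.request.uri contains "/google/"'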

Rotating capture files per packet received with tcpdump

How can I use the tcpdump command to capture packets and save each received packet to a separate file (i.e. rotate per packet, without losing any packets)?
How about saving the dump to a single file and then splitting it into separate files?
$ sudo tcpdump -c 10 -w mycap.pcap
tcpdump: data link type PKTAP
tcpdump: listening on pktap, link-type PKTAP (Packet Tap), capture size 65535 bytes
10 packets captured
You'll need to have Wireshark installed for editcap to be available (e.g. with brew install wireshark on macOS or apt-get on Ubuntu).
$ editcap -c 1 mycap.pcap output.pcap
10 packets captured -> 10 files created
$ ls -la output* | wc -l
10
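The two steps can also be chained into one line if you know roughly how many packets you want to capture (the count of 100 below is just an example):
$ sudo tcpdump -c 100 -w mycap.pcap && editcap -c 1 mycap.pcap output.pcap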

How to set up a cron job using the curl command?

After Apache was rebuilt, my cron jobs stopped working.
I used the following command:
wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
Now my DC support suggests changing from wget to curl. What would be the correct command in this case?
-O - is equivalent to curl's default behavior, so that's easy.
-q is curl's -s (or --silent).
--retry N is the closest substitute for wget's -t N (strictly, wget's -t counts total tries while curl's --retry counts additional retries after the first attempt).
All in all:
curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl
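As a crontab entry, this could look like the following (the every-five-minutes schedule and the /dev/null redirect are just examples; drop the redirect if you want cron to mail you the output):
*/5 * * * * curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl > /dev/null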
Try running it with the full path of wget:
/usr/bin/wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
you can find the full path with:
which wget
Also, check whether you can reach the destination domain with ping or other methods:
ping example.com
Update:
Based on the comments, the problem seems to be caused by this line in /etc/hosts:
127.0.0.1 example.com #change example.com to the real domain
It seems your options are limited: on the server where the cron job should run, the domain is pinned to 127.0.0.1, but the virtual host configuration does not work with that address.
What you can do is let wget connect by IP but send the Host header, so that virtual host matching still works:
wget -O - -q -t 1 --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl
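If you do end up switching to curl as your DC support suggested, the equivalent request with the Host header would be (xx.xx.35.162 being the same placeholder IP as above):
curl -s --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl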
Update
Also, you probably don't need to go through the web server at all, so why not just run:
perl /path/to/your/script/autobonus.pl
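and schedule it directly in cron, bypassing the web server entirely (the schedule is again just an example, and the script path is the placeholder from above):
*/5 * * * * perl /path/to/your/script/autobonus.pl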