I'm trying to extract TCP stream flows by TCP stream number from a pcap using tcpflow, but the tool dumps all the packets without numbering the flows with the right TCP stream number.
.\tcpflow64.exe -o out -a -r example.pcap -T %N_%A-%a_%B-%b.txt -S enable_report=YES
There is no info about stream numbers on the tcpflow man page, nor in the tool's --help output:
https://www.mankier.com/1/tcpflow
I have a system with Arch Linux running OVS. I also have a controller running on the same box. I have the following setup:
ovs-vsctl set-controller br-int tcp:192.168.1.201:6633
I was hoping to use tshark (tshark 2.2.8) to capture the OpenFlow traffic using the following command:
sudo tshark -i br-int -d tcp.port==6633,openflow -O openflow_v4
It dumps all the flows flowing in the system, but no packet-in OpenFlow messages. I did confirm a packet-in message was received by the controller (pasting the last few lines):
EVENT ofp_event->EventOFPPacketIn
packet in 1237689849893337 b8:27:xx:xx:yy:yy:zz ff:ff:ff:ff:ff:ff:3
I also understand from the tshark documentation that by default it uses port 6653 for OpenFlow:
tshark -G decodes | grep -i openflow
tcp.port 6653 openflow
However, I was under the impression that I can still look for OpenFlow traffic by using the following capture command:
https://wiki.wireshark.org/OpenFlow
tshark tcp port 6633
This also doesn't work: no events are captured, though I can see the controller receiving lots of events.
I would greatly appreciate any help here.
My guess would be that you're not listening on the correct interface. Try the following:
sudo tshark -i any -d tcp.port==6633,openflow -O openflow_v4
If that doesn't work, it's possible your controller and switch are not communicating using OpenFlow 1.3. To make sure you see everything, try:
sudo tshark -i any -d tcp.port==6633
Details: unless there's something particular about your setup, packets from Open vSwitch to the controller and back do not go through the bridge. Since both ends of the communication are on the same host, the packets are probably going through the loopback interface:
sudo tshark -i lo -d tcp.port==6633
I was able to reproduce your setup and issue, confirming my answer with Open vSwitch 2.5.2 and Floodlight (master branch). I can see packets passing through on the loopback interface with both tcpdump and tshark.
I'm doing research on big data. For that, I have developed a network with several nodes exchanging UDP unicast and multicast packets. There are UDP packets of 33792 bytes and ACK packets of 37 bytes. The MTU is set to 1500. Everything works fine for a little while, let's say 300 to 5000 packets exchanged. Then suddenly some machine receives a packet (I can see it with tcpdump -i any -vvv -XX -e -s 64 > dump.txt 2>&1), but the application socket doesn't receive it (select doesn't wake up).
I'm using IPv4 sockets with TTL set to 1, i.e. local network only.
After nights of trying to solve this, I ended up setting:
sudo sysctl -w net.core.wmem_max=134217728
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.ipv4.udp_mem='1638400 1638400 1638400'
sudo sysctl -w net.core.somaxconn=4096
sudo sysctl -w net.core.netdev_max_backlog=262144
sudo sysctl -w net.core.optmem_max=134217728
sudo sysctl -w net.ipv4.udp_rmem_min=65535
sudo sysctl -w net.ipv4.udp_wmem_min=65535
The client sockets set SO_SNDBUF to 134217728 (128 MB), and the server socket sets SO_RCVBUF to the same value.
But it looks like this still hasn't solved the problem. Any thoughts? TIA.
Actually, it seems it did solve the problem. Anyone wanting to explain in detail the advantages/disadvantages/tradeoffs of the sysctl values I set is very welcome though.
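One thing worth checking when tuning these values: the kernel reports the buffer size it actually granted, so you can verify from code that the sysctl change took effect. A minimal Python sketch (the 128 MB figure matches the sysctls above):

```python
import socket

# Request a 128 MB receive buffer and read back what the kernel actually
# granted. On Linux the granted value is twice the request, capped by
# net.core.rmem_max -- if it comes back smaller than expected, the sysctl
# change did not take effect for this socket.
BUF = 134217728

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("granted receive buffer:", granted)
s.close()
```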
You have to join the multicast group to reliably receive multicast packets.
On UN*X this is done with something like:
struct ip_mreq mreq;
mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1"); /* group address (example value) */
mreq.imr_interface.s_addr = htonl(INADDR_ANY);      /* let the kernel pick the interface */
setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
The pitfall is that if something is not set up correctly (or does not work properly, e.g. a switch), you will be able to receive multicast traffic for some time, and then, all of a sudden, it stops. So when you receive packets you cannot draw the conclusion that everything is OK.
Also: all the potentially transparent infrastructure in your network (e.g. layer 2 switches, i.e. normal switches) needs to support the IGMP version your OS is using.
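For completeness, a minimal Python equivalent of the membership join above (the group address and port are placeholders, not values from the question):

```python
import socket
import struct

# Placeholders: substitute your actual multicast group and port.
GROUP, PORT = "239.0.0.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP is what makes the kernel send the IGMP membership report;
# without it, delivery of group traffic is not guaranteed even if it appears
# to work for a while.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# sock.recvfrom(65535) would now block until a datagram for the group arrives.
```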
Sorry for the vague title, but my issue is a bit complicated to explain.
I have written a "captive portal" for a WLAN access point in cherrypy, which is just a server that blocks MAC addresses from accessing the internet before they have registered at a certain page. For this purpose, I wrote some iptables rules that redirect all HTTP traffic to me:
sudo iptables -t mangle -N internet
sudo iptables -t mangle -A PREROUTING -i $DEV_IN -p tcp -m tcp --dport 80 -j internet
sudo iptables -t mangle -A internet -j MARK --set-mark 99
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp -m mark --mark 99 -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1
(the specifics of this setup are not really important for my question, just note that an "internet" chain is created which redirects HTTP to port 80 on the access point)
At port 80 on the AP, a cherrypy server serves a static landing page with a "register" button that issues a POST request to http://10.0.0.1/agree. To process this request, I have created a method like this:
@cherrypy.expose
def agree(self, **kwargs):
    # retrieve the MAC address of the client by checking the ARP table
    ip = cherrypy.request.remote.ip
    mac = str(os.popen("arp -a " + str(ip) + " | awk '{ print $4 }' ").read())
    mac = mac.rstrip('\r\n')
    # add an iptables rule to whitelist the client; rmtrack removes previous connection-tracking state
    os.popen("sudo iptables -I internet 1 -t mangle -m mac --mac-source %s -j RETURN" % mac)
    os.popen("sudo rmtrack %s" % ip)
    return open('welcome.html')
So this method retrieves the client's MAC address from the arp table, then adds an iptables exception to remove that specific MAC from the "internet" chain that redirects traffic to the portal.
Now when I test this setup, something interesting happens. Adding the exception in iptables works, i.e. the client can now access web pages without getting redirected to me. The problem is that the initial request doesn't come through to my server, i.e. the page welcome.html is never opened. Instead, right after the iptables and rmtrack calls are executed, the client tries to open the "agree" path on the page they requested before the redirect to my portal.
For example, if they typed "google.com" in the address bar, then got sent to my portal and agreed, they would now try to open http://google.com/agree. As a result, they get an error after a while. It appears that the iptables or the rmtrack call changes the request to go for the original destination while it is still being processed at my server, which doesn't make any sense to me. Consequently, it doesn't matter which static page I return or which redirects I make after those terminal commands have been issued: the return value of my function isn't used by the client.
How could I fix this problem? Every piece of useful information is appreciated.
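As an aside on the code above: interpolating the MAC/IP into a shell string is injection-prone if the ARP output is ever unexpected. A sketch of the same two calls using argument lists instead of a shell (the chain name internet and the rmtrack call are taken from the question):

```python
import subprocess

def whitelist_cmds(mac, ip):
    # Build the two commands as argument lists so the mac/ip values are never
    # interpreted by a shell.
    return [
        ["sudo", "iptables", "-t", "mangle", "-I", "internet", "1",
         "-m", "mac", "--mac-source", mac, "-j", "RETURN"],
        ["sudo", "rmtrack", ip],
    ]

def whitelist(mac, ip):
    # Run both commands, raising if either fails (os.popen hides failures).
    for cmd in whitelist_cmds(mac, ip):
        subprocess.run(cmd, check=True)
```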
Today I managed to solve my problem, so I'm going to put the solution here, although I rather doubt that a lot of people will run into the same problem.
Basically, all that was needed was an absolute-path redirect somewhere during the request processing on the captive portal server. In my case, the form on the index page where you agreed to my T&C was using the relative action /agree. This meant that the client was left believing it was accessing that path on its original destination server (e.g. google.com/agree).
Using the absolute form http://10.0.0.1/agree instead, the client follows the correct redirect after the iptables call.
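The browser-side behaviour here is just ordinary relative URL resolution, which is easy to demonstrate (Python sketch, using google.com as the stand-in original destination):

```python
from urllib.parse import urljoin

# The client resolves the form action against the URL it *thinks* it is on
# (the original destination), not against the portal that actually answered.
print(urljoin("http://google.com/", "/agree"))
# -> http://google.com/agree  (the broken case)

print(urljoin("http://google.com/", "http://10.0.0.1/agree"))
# -> http://10.0.0.1/agree  (the absolute form fixes it)
```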
For testing purposes I am sending TCP messages to a local server as follows:
echo -e "some message" | netcat localhost 1234
With netcat installed using brew install netcat.
This works fine except that it blocks for quite a long time (about a minute). I tried the -w 1 option to specify a timeout, but this doesn't change anything.
The process listening on the other end is a spring-xd tcp source.
Is there any other way of sending a tcp message that does not block as long?
I've not seen such a delay on Linux; I haven't tried on OS X (it comes with nc instead).
What is your stream definition? The default tcp source expects data to be terminated with CRLF, e.g. telnet localhost 1234. You need a RAW decoder for netcat.
EDIT:
I just tested
xd:>stream create foo --definition "tcp --decoder=RAW | log" --deploy
with
$ echo "foo" | nc localhost 1234
and had no problems.
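If you'd rather avoid netcat entirely, the same CRLF-terminated send is a few lines of Python (a sketch; host and port are the ones from the question):

```python
import socket

def send_line(host, port, message, timeout=1.0):
    # Open a TCP connection, send one CRLF-terminated message (what the
    # default spring-xd tcp source expects), and close immediately -- no
    # lingering wait for more input as with netcat.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(message.encode() + b"\r\n")

# send_line("localhost", 1234, "some message")
```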
I need to discover port 161, both UDP and TCP, in big networks, and the results must have the output format I chose below.
In order to discover TCP I use
nmap -T4 -sS -p T:161 -iL c:\input.txt -oN c:\output.txt --append-output --open
In order to discover UDP I use
nmap -T4 -sU -p 161 -iL c:\input.txt -oN c:\output.txt --append-output --open
I am looking for a command that will combine both of them. I need a list of both TCP and UDP results from one command, in one result.
Is it possible? How?
Nmap allows you to combine scan types into a single scan, as long as you don't choose scan types that target the same protocols (e.g. -sST, which would request a TCP SYN and TCP Connect scan, an illegal combination). So your combined scan would be:
nmap -T4 -sSU -p 161 -iL c:\input.txt -oN c:\output.txt --append-output --open
Unrelated note: If you have the disk space, I would highly recommend switching the -oN option for -oA or just adding -oX to get XML output. Lots of security tools can process this structured output and produce meaningful results. Plus, you don't have to worry when Nmap's screen output changes (which it does fairly regularly) and breaks your parsing scripts, since the XML is a much more stable and naturally extensible format.
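To illustrate why the XML output is pleasant to work with, here is a sketch that pulls the open ports out of an -oX file using only the standard library (the element and attribute names follow Nmap's documented XML structure):

```python
import xml.etree.ElementTree as ET

def open_ports(xml_text):
    # Walk <host>/<ports>/<port> elements and keep ports whose <state> is open.
    root = ET.fromstring(xml_text)
    results = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            if port.find("state").get("state") == "open":
                results.append((addr, port.get("protocol"), int(port.get("portid"))))
    return results

# Usage: open_ports(open("output.xml").read())
```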