Replaying pcap on loopback - pcap

I have a set of pcap files containing UDP traffic from two hosts, and have to perform some analysis on this traffic on a regular basis.
Ideally, I would like to avoid having to frequently set up local interfaces with specific IPs just to replay these files. I want to be able to simply replay them on my loopback interface, using tcprewrite to adjust the pcaps.
Here is what it currently looks like:
# Remove mac addresses for loopback interface
# Remove VLAN tags
tcprewrite \
--enet-smac=00:00:00:00:00:00 \
--enet-dmac=00:00:00:00:00:00 \
--enet-vlan=del \
--infile="${INFILE}" \
--outfile="${OUTFILE}.tmp"
# Change source and destination IP to loopback
# Regenerate IP checksums
tcprewrite \
--srcipmap=0.0.0.0/0:127.0.0.1 \
--dstipmap=0.0.0.0/0:127.0.0.1 \
--fixcsum \
--infile="${OUTFILE}.tmp" \
--outfile="${OUTFILE}"
It almost works. I can replay these files on my loopback using tcpreplay, and I see the packets with tcpdump on lo. Still, regular userspace sockets do not see this traffic on the loopback.
From my understanding, this is related to the way layer 2 is handled on the loopback interface on Linux. It seems I need to rewrite the layer 2 headers (DLT) from plain Ethernet to the null encapsulation used by BSD loopbacks.
Any experience with replaying UDP traffic captured on Ethernet onto the loopback interface would be greatly appreciated. I cannot figure out how to do this, or whether it is at all possible with pcap/tcprewrite.
I tried to follow https://www.tcpdump.org/linktypes.html and force a DLT header type of 0 (DLT_NULL) with a content of 2 (IPv4), but with no success:
tcprewrite \
--enet-smac=00:00:00:00:00:00 \
--enet-dmac=00:00:00:00:00:00 \
--enet-vlan=del \
--dlt=user \
--user-dlt=0 \
--user-dlink=02,00,00,00 \
--infile="${INFILE}" \
--outfile="${OUTFILE}.tmp"
Fatal Error in tcpedit.c:tcpedit_packet() line 135:
From plugins/dlt_null/null.c:dlt_null_encode() line 201:
DLT_NULL and DLT_LOOP plugins do not support packet encoding
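Not part of the original workflow, but one way to sidestep the layer-2 problem entirely is to extract the UDP payloads from the capture and re-send them through an ordinary socket, so userspace listeners on lo receive them through the normal stack. A minimal sketch, assuming classic little-endian pcap files with Ethernet (DLT_EN10MB) framing and IPv4/UDP traffic; the function names are hypothetical:

```python
import socket
import struct

def iter_udp_payloads(pcap_bytes):
    """Yield (dst_port, payload) for each IPv4/UDP packet in a classic
    little-endian pcap with Ethernet (DLT_EN10MB) link type."""
    magic = struct.unpack_from("<I", pcap_bytes, 0)[0]
    if magic != 0xA1B2C3D4:
        raise ValueError("only classic little-endian pcap handled here")
    offset = 24  # skip the 24-byte global header
    while offset + 16 <= len(pcap_bytes):
        # record header: ts_sec, ts_usec, incl_len, orig_len
        incl_len = struct.unpack_from("<IIII", pcap_bytes, offset)[2]
        frame = pcap_bytes[offset + 16:offset + 16 + incl_len]
        offset += 16 + incl_len
        if len(frame) < 34 or frame[12:14] != b"\x08\x00":
            continue  # not IPv4
        ihl = (frame[14] & 0x0F) * 4  # IP header length in bytes
        if frame[23] != 17:
            continue  # not UDP (IP protocol 17)
        udp = frame[14 + ihl:]
        dst_port = struct.unpack_from("!H", udp, 2)[0]
        yield dst_port, udp[8:]

def replay_to_loopback(pcap_path, host="127.0.0.1"):
    """Re-send every UDP payload to its original destination port on lo."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(pcap_path, "rb") as f:
        for port, payload in iter_udp_payloads(f.read()):
            sock.sendto(payload, (host, port))
```

Because the payloads travel through a real socket, no MAC/DLT rewriting is needed at all; the trade-off is that original timing and IP-level details are lost.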

Related

Ping to all hosts works in spite of enabling firewall rule on SDN Floodlight Controller

I am running the Floodlight SDN controller remotely and have a Mininet topology with 2 switches and 2 hosts. In spite of enabling the firewall using the REST API [curl command], I am still able to ping all the hosts.
Mininet Topo-
sudo mn --topo=linear,2 --mac --controller=remote,ip=192.168.56.107 --switch=ovsk,protocols=OpenFlow13
Floodlight Controller-
sdn#sdn-controllers:~/floodlight$ sudo java -jar target/floodlight.jar
REST API enabling Firewall-
curl http://192.168.56.107:8080/wm/firewall/module/enable/json -X PUT -d ''
Pingall works even after enabling firewall rule-
Why is the traffic not being dropped? What am I missing?
The command below just enables the firewall; it does not install any rule to control packet flow by default. Assume the controller runs on localhost.
curl http://localhost:8080/wm/firewall/module/enable/json -X PUT -d ''
You can check the firewall status with the command below and verify whether the firewall is really enabled:
curl http://localhost:8080/wm/firewall/module/status/json
By default the firewall denies all traffic unless an explicit ALLOW rule is created.
You may want to check the list of existing rules by querying /wm/firewall/rules/json to see whether any ALLOW rules already exist for your network topology.
You can add a rule at the switch of interest as below; the switch ID should match your topology. Let's say switch 1's ID is 00:00:00:00:00:00:00:01. The command below adds an ALLOW rule for all flows passing through switch 00:00:00:00:00:00:00:01.
curl -X POST -d '{"switchid": "00:00:00:00:00:00:00:01"}' http://localhost:8080/wm/firewall/rules/json
The existence of the above rule shall allow ping between hosts connected to switch 1 only.
Let's say h1's IP address is 10.0.1.1 and h2's is 10.0.1.2.
The commands below add ALLOW rules for all flows between host 10.0.1.1 and host 10.0.1.2. Note that not specifying an action implies an ALLOW rule.
curl -X POST -d '{"src-ip": "10.0.1.1/32", "dst-ip": "10.0.1.2/32"}' http://localhost:8080/wm/firewall/rules/json
curl -X POST -d '{"src-ip": "10.0.1.2/32", "dst-ip": "10.0.1.1/32"}' http://localhost:8080/wm/firewall/rules/json
The existence of the above rules shall allow ping between the mentioned hosts.
To block traffic between hosts, add a DENY rule as below, using the host IP addresses of interest from your network topology.
curl -X POST -d '{"src-ip": "10.0.1.1/32", "dst-ip": "10.0.1.2/32", "nw-proto": "ICMP", "action": "DENY"}' http://localhost:8080/wm/firewall/rules/json
The existence of the above rule shall block ping between the mentioned hosts. The pingall command shall now show that ping between host 10.0.1.1 (h1) and 10.0.1.2 (h2) is not successful. In that case, the command below shall also show that ping is not working between h1 and h2:
h1 ping h2
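The rule bodies above can also be generated programmatically, which avoids quoting mistakes in hand-typed curl commands. A hedged sketch, assuming a reachable controller at the URL used above; `make_rule` and `post_rule` are hypothetical helper names:

```python
import json
import urllib.request

FIREWALL_RULES_URL = "http://localhost:8080/wm/firewall/rules/json"

def make_rule(src_ip, dst_ip, proto=None, action=None):
    """Build a Floodlight firewall rule body; omitting action implies ALLOW."""
    rule = {"src-ip": f"{src_ip}/32", "dst-ip": f"{dst_ip}/32"}
    if proto:
        rule["nw-proto"] = proto
    if action:
        rule["action"] = action
    return rule

def post_rule(rule, url=FIREWALL_RULES_URL):
    """POST one rule to the firewall module (requires a running controller)."""
    req = urllib.request.Request(url, data=json.dumps(rule).encode(),
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# e.g. block ICMP between h1 and h2:
# post_rule(make_rule("10.0.1.1", "10.0.1.2", proto="ICMP", action="DENY"))
```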

How to exclude port number from RSS hashing for tcp4 with ixgbe

In the README for the ixgbe driver there is a section about configuring the RSS hashing algorithm:
-N --config-nfc
Configures the receive network flow classification.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6
m|v|t|s|d|f|n|r...
To exclude UDP port numbers from RSS hashing run:
ethtool -N ethX rx-flow-hash udp4 sd
Excluding the port from the hashing algorithm works fine for udp4, but when I try the same for tcp4, it fails:
~# ethtool -N eth2 rx-flow-hash tcp4 sd
Cannot change RX network flow hashing options: Invalid argument
What am I doing wrong?
I have seen that error once, when I forgot to bring up the interface before running the ethtool command. So run:
ifconfig eth2 up
Then run the ethtool command again to see whether that solves the problem. I have seen a couple of cases where the problem was not resolved this way, but I recommend trying it first.

Block facebook.com using openwrt router

I am using an OpenWRT router. I need to block a URL or multiple URLs (not IPs) for a specific time. For example, I want to block facebook.com so that clients of this router can't access the website. Firewall rules should have an option to do that, but I don't know how.
Here is one way to block by domain name rather than by IP address.
The main reason you need such a complicated method is that a domain name (e.g. facebook.com) may resolve to a different IP address at any given time. So we need to keep a list of resolved IP addresses and add iptables rules based on that list.
First, you should enable logging in dnsmasq config:
uci set dhcp.@dnsmasq[0].logqueries=1
uci commit dhcp
/etc/init.d/dnsmasq restart
This will give you log entries like:
daemon.info dnsmasq[2066]: reply facebook.com is 31.13.72.36
Now you just have to continuously parse the syslog and add corresponding iptables rules like this (note that you will most likely want a more versatile script, and ipset for better performance):
logread -f | awk '/facebook.com is .*/{print $11}' | while read IP; do iptables -I OUTPUT -d $IP -j DROP; done
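The awk field index ($11) is fragile, since the syslog prefix can shift the field positions. A more robust sketch that pulls resolved addresses out of dnsmasq log lines; the regex is an assumption based on the "reply X is Y" format shown above, and the iptables invocation is unchanged:

```python
import re

# Matches dnsmasq log lines like:
#   daemon.info dnsmasq[2066]: reply facebook.com is 31.13.72.36
REPLY_RE = re.compile(r"reply (\S+) is (\d{1,3}(?:\.\d{1,3}){3})")

def resolved_ips(log_lines, domain="facebook.com"):
    """Yield IPv4 addresses dnsmasq resolved for `domain` or its subdomains."""
    for line in log_lines:
        m = REPLY_RE.search(line)
        if m and (m.group(1) == domain or m.group(1).endswith("." + domain)):
            yield m.group(2)

# Feed it the output of `logread -f` and add one DROP rule per address, e.g.:
# subprocess.run(["iptables", "-I", "FORWARD", "-d", ip, "-j", "DROP"])
```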

Entities disappears when Platform reboots

We have a problem with entities in OrionCB. Each time the platform testbed goes out of service, the entities we created before disappear.
[root@orioncb ~]# curl localhost:1026/ngsi10/contextEntities/finesce_meteo -s -S --header 'Content-Type: application/xml' | xmllint --format -
curl: (7) couldn't connect to host
-:1: parser error : Document is empty
^
-:1: parser error : Start tag expected, '<' not found
This is an example of the output when we try to list the "finesce_meteo" entity.
Regards,
Ismael
With the information in your question I cannot be sure whether the entities have actually disappeared; it points to a different problem.
Note the curl: (7) couldn't connect to host message. That means that the client cannot reach the port 1026 of the host. The most probable causes of this problem are:
Orion Context Broker is not started in that host
Orion Context Broker is started in that host, but on a port different from 1026
Something in the host (e.g. a firewall or security group) is blocking the incoming connection
Something in the client (e.g. a firewall) is blocking the outgoing connection
Some other network issue is causing the connection problem
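To separate "broker unreachable" from "entities lost", a plain TCP connect to port 1026 answers the same question curl's error (7) raises. A minimal sketch, with host and port taken from the question:

```python
import socket

def broker_reachable(host="localhost", port=1026, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    A False result corresponds to curl's "(7) couldn't connect to host":
    the broker is down, listening elsewhere, or blocked by a firewall.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```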

Perl Most effective way for scanning for a particular web server http banner?

So basically I'm trying to find web servers that run, for example, Apache 2.2.4. What's the best way of doing this?
Scan an IP range for hosts with port 80 open and a web server running, then have a script load the IPs and check whether they return the server banner I want.
Or is there a faster alternative?
Basically I'm trying to make a script like ShodanHQ.
I'm trying to find a large number of web servers running a certain version. Can anybody give me a direction? Thanks, I hope I was clear.
For doing Internet-wide surveys like Shodan or Scans.io, you need very high-bandwidth access, legal approval (or at least a blind eye turned) from your ISP, and likely an asynchronous scanner like ZMap or masscan. Nmap is a decent alternative with the --min-rate argument. Anything using the default TCP stack on your OS (e.g. curl, netcat, or Perl solutions) will not be able to keep up with the high packet volume and number of targets required.
If, however, you want to scan a smaller network (say a /16 with 65K addresses), then Nmap is up to the job, requires less setup than the asynchronous scanners (since they require firewall settings to prevent the native TCP stack from responding to returned probes), and is widely available. You could get reasonable performance with this command:
sudo nmap -v -T5 -PS80 -p80 -sS --script http-server-header -oA scan-results-%D%T 10.10.0.0/16
This breaks down to:
-v - verbose output
-T5 - Fastest timing options. This may be too much for some networks; try -T4 if you suspect lost results.
-PS80 - Only consider hosts that respond on port 80 (open or closed).
-p80 - Scan port 80 on alive hosts
-sS - Use Nmap's half-open SYN scan, which has the best timing performance
--script http-server-header - This script will grab the Server header from a basic GET request. Alternatively you could use http-headers to get all headers, or use -sV --version-light to do basic version detection from probe responses.
-oA scan-results-%D%T - Output 3 formats into separate timestamped files. You can process results with one of the many tools that imports Nmap XML output.
You could use curl and sed:
curl -sI 'http://192.0.2.1' | sed -n 's/^Server:[[:blank:]]*//p'
Call it from perl with:
perl -e '$server = qx(curl -sI http://192.0.2.1 | sed -n "s/^Server:[[:blank:]]*//p"); print $server'
The -I option makes curl print the HTTP headers, using a HEAD request.
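If you would rather stay in one process than shell out to curl and sed, the same banner grab can be done with Python's standard library. A sketch; the target address is a placeholder, and unreachable hosts simply yield None:

```python
import http.client

def server_banner(host, port=80, timeout=5):
    """Send a HEAD request for / and return the Server header, or None."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server")
    except OSError:
        return None
    finally:
        conn.close()

# e.g. print(server_banner("192.0.2.1"))
```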