Snort Rule to Alert DNS that has ACK - snort

How can I write a rule that alerts me when a DNS packet has an acknowledgment number set when it shouldn't? I'm quite confused by this.
This is what I see in Wireshark: Acknowledgment Number: 0x000001a4 [should be 0x00000000 because ACK flag is not set]
But i want a rule that will alert me.
This rule below isnt working for me.
alert tcp any any -> 192.168.10.2 53 (msg:"MALFORMED DNS QUERY"; flags: A; ack:0; sid:10501;)
The above won't show up in my alert log, but if I remove flags: and ack: it will.

When the ACK flag is set, the acknowledgment number will never be "0", so the flags: A and ack:0 conditions can never both match and this rule will never fire as written.
Without "ack:" the only check in the rule is for the ACK flag being set (rule header aside). If you are running DNS over TCP you will see the ACK flag set as a normal part of the TCP conversation, i.e. each endpoint acknowledging received TCP segments.
What you're seeing in Wireshark:
Acknowledgment Number: 0x000001a4 [should be 0x00000000 because ACK flag is not set]
This might be part of Wireshark's expert info telling you that the acknowledgment number is non-zero when it should be zero (for instance, when a TCP connection is initiated, the first packet should only have the SYN flag set and an acknowledgment number of 0).
I'm really not sure what you are trying to accomplish here.
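That said, if the goal is simply to spot TCP segments headed for port 53 that carry a non-zero acknowledgment number even though the ACK flag is clear (the condition Wireshark is flagging), one option is to check for them outside Snort. A minimal Scapy sketch, assuming the same 192.168.10.2 target as the rule above:
from scapy.all import sniff, IP, TCP

def malformed_dns(pkt):
    tcp = pkt[TCP]
    ack_flag_set = bool(int(tcp.flags) & 0x10)  # 0x10 is the ACK bit
    if not ack_flag_set and tcp.ack != 0:
        print(f"non-zero ack {tcp.ack:#x} without ACK flag: "
              f"{pkt[IP].src} -> {pkt[IP].dst}")

sniff(filter="tcp and dst host 192.168.10.2 and dst port 53",
      prn=malformed_dns, store=False)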

Related

TCP socket state becomes "persist" after changing IP address even though keep-alive was configured earlier

I ran into a problem with TCP socket keepalive.
TCP keep-alive is enabled and configured after the socket connects, and the system has its own TCP keep-alive configuration.
'ss -to' shows the keep-alive information for the connection.
The network interface is a PPPoE device; if we ifup the interface, it gets a new IP address, and the old TCP connection stays established until the keep-alive times out.
But sometimes 'ss -to' shows that the TCP connection enters the 'persist' state, which takes a long time (about 15 minutes) to close.
Following is the result of 'ss -to':
ESTAB 0 591 172.0.0.60:46402 10.184.20.2:4335 timer:(persist,1min26sec,14)
The source address is '172.0.0.60', but the network interface's actual address has been updated to '172.0.0.62'.
This is the correct result of 'ss -to':
ESTAB 0 0 172.0.0.62:46120 10.184.20.2:4335 timer:(keepalive,4.480ms,0)
I don't know why the "timer" changes to 'persist', which effectively disables keep-alive.
In short: TCP keepalive is only relevant if the connection is idle, i.e. there is no data to send. If instead there is still data to send but sending is currently impossible due to a missing ACK or a window of 0, then other timeouts apply. This is likely the problem in your case.
For the deeper details see The Cloudflare Blog: When TCP sockets refuse to die.
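For reference, a minimal sketch of the two knobs involved (Linux-specific socket options; the timeout values below are arbitrary examples, and the address/port are taken from the ss output above). Keepalive only covers the fully idle case; TCP_USER_TIMEOUT is what bounds the situation where unacknowledged data is stuck in the send queue, e.g. the zero-window 'persist' case:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# keepalive: only fires when the connection is completely idle
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # seconds of idle before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before the connection is dropped

# TCP_USER_TIMEOUT: maximum time (ms) transmitted data may remain
# unacknowledged before the kernel forcibly closes the connection;
# this is the timeout that applies when keepalive does not
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 60000)

s.connect(("10.184.20.2", 4335))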

Kube-proxy or ELB "delaying" packets of HTTP requests

We're running a web API app on Kubernetes (1.9.3) in AWS (set up with kops). The app is a Deployment and exposed by a Service (type: LoadBalancer), which is actually an ELB (v1) on AWS.
This generally works, except that some packets (fragments of HTTP requests) are "delayed" somewhere between the client and the app container. (This happens with both HTTP and HTTPS, which terminates on the ELB.)
From the node side:
(Note: almost all packets on the server side arrive duplicated 3 times.)
We use keep-alive, so the TCP socket stays open and requests arrive and return pretty fast. Then the problem happens:
First, a packet containing only the HTTP headers arrives [PSH,ACK] (I can see the headers in the payload with tcpdump).
An [ACK] is sent back by the container.
The TCP socket/stream then goes quiet for a very long time (up to 30s and more; the interval is not consistent, and we consider anything >1s a problem; see the sniffer sketch just after this list).
Another [PSH,ACK] with the HTTP data arrives, and the request can finally be processed in the app.
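For reference, a small sniffer sketch that flags streams where consecutive segments arrive more than a second apart, which is the gap described above (an assumption-laden example, not the exact tcpdump setup used here; it assumes Scapy on the node and plain HTTP on port 80, so adjust the BPF filter to your environment):
from scapy.all import sniff, IP, TCP

last_seen = {}       # (src, sport, dst, dport) -> time of previous segment
GAP_THRESHOLD = 1.0  # seconds; anything above this is treated as a problem

def check_gap(pkt):
    if IP not in pkt or TCP not in pkt:
        return
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
    now = float(pkt.time)
    prev = last_seen.get(key)
    if prev is not None and now - prev > GAP_THRESHOLD:
        print(f"{now - prev:.2f}s gap in stream {key}")
    last_seen[key] = now

sniff(filter="tcp port 80", prn=check_gap, store=False)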
From the client side:
I've run some traffic from my computer, recording it on the client side to see the other end of the problem, but I'm not 100% sure it represents the real client side.
A [PSH,ACK] with the headers goes out.
A couple of [ACK]s with parts of the payload start going out.
No response arrives for a few seconds (or more), and no more packets go out.
An [ACK] marked as [TCP Window Update] arrives.
After another short pause, [ACK]s start arriving and the session continues until the end of the payload.
This is only happening under load.
To my understanding, this is somewhere between the ELB and the Kube-Proxy, but I'm clueless and desperate for help.
These are the arguments kube-proxy runs with:
Commands: /bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-proxy.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-proxy --cluster-cidr=100.96.0.0/11 --conntrack-max-per-core=131072 --hostname-override=ip-10-176-111-91.ec2.internal --kubeconfig=/var/lib/kube-proxy/kubeconfig --master=https://api.internal.prd.k8s.local --oom-score-adj=-998 --resource-container="" --v=2 > /tmp/pipe 2>&1
And we use Calico as the CNI.
So far I've tried:
Using service.beta.kubernetes.io/aws-load-balancer-type: "nlb" - the issue remained.
(Playing around with ELB settings hoping something would do the trick ¯\_(ツ)_/¯.)
Looking for errors in the kube-proxy logs, I found rare occurrences of the following:
E0801 04:10:57.269475 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Endpoints: Get https://api.internal.prd.k8s.local/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp: lookup api.internal.prd.k8s.local on 10.176.0.2:53: no such host
...and...
E0801 04:09:48.075452 1 proxier.go:1667] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 7 failed
)
I0801 04:09:48.075496 1 proxier.go:1669] Closing local ports after iptables-restore failure
I couldn't find anything describing such an issue and would appreciate any help. Ideas on how to continue troubleshooting are welcome.
Best,
A

How to build forged ICMP "Destination Unreachable" Type 3 Code 4 packet

I have created a forged Destination Unreachable ICMP packet with type 3 and code 4 (fragmentation needed and DF bit set). My setup has a server, a client, and a switch between them. Ideally this ICMP would be generated by a router/gateway, but I'm generating it at the client using the Scapy tool. Here is how I'm creating it:
from scapy.all import IP, ICMP, TCP, send

ip = IP()
icmp = ICMP()
# outer IP header: forged ICMP error sent from the client to the server
ip.dst = ip_server
ip.src = ip_client
ip.proto = 1  # ICMP (Scapy also fills this in automatically when the layers are stacked)
# ICMP type 3, code 4: fragmentation needed and DF set
icmp.type = 3
icmp.code = 4
mtu = 1300
icmp.nexthopmtu = mtu  # next-hop MTU; older Scapy versions packed this into the "unused" field
#
# quoted original packet: the server -> client TCP segment that "triggered" the error
#
ip_orig = IP()
ip_orig.src = ip_server
ip_orig.dst = ip_client
tcp_orig = TCP()
tcp_orig.sport = 50000
tcp_orig.dport = 50000
tcp_orig.seq = original_seq  # sequence number taken from the server's retransmitted segment
#
# send the forged ICMP error
#
send(ip / icmp / ip_orig / tcp_orig)
Steps I'm following to demonstrate the effect of this ICMP:
1> The server and client are talking to each other using sockets.
2> As soon as the server accepts the connection, I introduce a 60-second pause during which I block all TCP ACKs going out of the client machine (because if the server received ACKs for the message it sent, it wouldn't react to the ICMP).
3> The server sends its first message to the client but doesn't receive any ACKs, so it keeps retransmitting the message; meanwhile I inject an ICMP message as in the Scapy code above: send(ip/icmp/ip_orig/tcp_orig). I'm reporting an MTU of 1300 in the ICMP I'm sending.
4> Ideally the server should reduce its MTU and retransmit the message to the client with an MTU of 1300.
But the server keeps retransmitting the message with an MTU of 1500.
Why is the server not reducing its MTU? Am I doing something wrong in my demonstration? Any help would be greatly appreciated.
There are a few pointers I outlined in this answer and in its comments:
The specification requires that the original IP header that is encapsulated in the ICMP error message (i.e. ip_orig) is exactly identical to the one received. Therefore, setting just its source and destination IP addresses (i.e. ip_orig.src and ip_orig.dst) is probably not enough.
The sequence number of the original TCP header that is encapsulated in the ICMP error message (i.e. tcp_orig.seq) should be set as well, since the specification requires that at least 8 bytes of the problematic packet's IP-layer payload are included in the ICMP error message. (A sketch applying these two points follows after these pointers.)
Verify that path MTU discovery is enabled and that the DF bit is set. You can enable path MTU discovery with sysctl -w net.ipv4.ip_no_pmtu_disc=0.
Verify that there isn't any firewall and/or iptables rule that blocks ICMP messages.
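To make the first two pointers concrete, here is a rough sketch of one way to do it with Scapy: capture one of the server's retransmitted segments and quote it verbatim inside the forged error instead of rebuilding the headers by hand. ip_server and ip_client are the same placeholders as in the question, and the port 50000 filter matches the test traffic described there:
from scapy.all import sniff, send, raw, IP, ICMP

# grab one of the server's retransmitted segments exactly as it arrived
orig = sniff(filter=f"tcp port 50000 and src host {ip_server}", count=1)[0]

# quote the received IP header plus the first 8 bytes of its payload, verbatim
ihl = orig[IP].ihl * 4                  # IP header length in bytes
quoted = raw(orig[IP])[:ihl + 8]

# forged "fragmentation needed" error carrying the verbatim quote
err = IP(src=ip_client, dst=ip_server) / ICMP(type=3, code=4, nexthopmtu=1300) / quoted
send(err)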

FIN,ACK after PSH,ACK

I'm trying to implement communication between a legacy system and a Linux system, but I constantly get one of the following scenarios:
(The legacy system is the server, the Linux machine is the client.)
Function recv(2) returns 0 (the peer has performed an orderly shutdown.)
> SYN
< SYN, ACK
> ACK
< PSH, ACK (the data)
> FIN, ACK
< ACK
> RST
< FIN, ACK
> RST
> RST
Function connect(2) returns -1 (error)
> SYN
< RST, ACK
When the server has sent its data, the client should answer with data, but instead I get a "FIN, ACK".
Why is it like this? How should I interpret this? I'm not that familiar with TCP at this level
When the server has sent its data, the client should answer with data, but instead I get a "FIN, ACK". Why is it like this? How should I interpret this?
It could be that once the server has sent the data (line 4) the client closes the socket or terminates prematurely and the operating system closes its socket and sends FIN (line 5). The server replies to FIN with ACK but the client has ceased to exist already and its operating system responds with RST. (I would expect the client OS to silently ignore and discard any TCP segments arriving for a closed connection during the notorious TIME-WAIT state, but that doesn't happen for some reason.)
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination:
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-UX do. If such a host actively closes a connection but still has not read all the incoming data the stack already received from the link, this host sends a RST instead of a FIN (Section 4.2.2.13 in RFC 1122). This allows a TCP application to be sure the remote application has read all the data the former sent—waiting the FIN from the remote side, when it actively closes the connection. However, the remote TCP stack cannot distinguish between a Connection Aborting RST and this Data Loss RST. Both cause the remote stack to throw away all the data it received, but that the application still didn't read
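For illustration, a minimal client sketch that reproduces this pattern (the address and port are placeholders): it connects and then closes the socket without ever reading the data the server pushes, so the kernel finishes, or aborts, the conversation on its own:
import socket

srv = ("192.0.2.10", 7777)   # placeholder server address/port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(srv)               # SYN / SYN,ACK / ACK
# the server pushes its data here (PSH,ACK), but we never call recv() ...
s.close()                    # ... so the kernel sends FIN (or RST if unread
                             # data is already buffered, per the half-duplex
                             # close behaviour quoted above)
# once the socket is gone, any later segments from the server (its own
# FIN,ACK) are answered with RST by the client's kernel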
After FIN, PSH, ACK, one transaction is completed. A second request is then received, but the host responds with [RST] seq=140 win=0 len=0.

Snort and portscan logging

I posted a question a couple of days ago about the portscan log; however, this is a separate question that deals with the new portscan logs.
Time: 04/13-15:29:41.660134
event_id: 6042
x.x.x.x -> x.x.x.x(portscan) UDP Filtered Portscan
Priority Count: 0
Connection Count: 200
IP Count: 66
Scanner IP Range:x.x.x.x:x.x.x.x
Port/Proto Count: 32
Port/Proto Range: 137:17500
I am trying to determine 4 things from this log: source IP, destination IP, source port, and destination port.
Some other details I would like, though not strictly necessary, are the type of portscan and the flags for this scan.
Again, thanks for any help that can be provided.
The protocol was UDP, so there are no flags available (that's a TCP thing). The log suggests (if I am reading it correctly) that 32 distinct ports were probed, spanning a range from 137 to 17500; the 30 ports in between are not recorded, so the individual destination ports cannot be recovered from this aggregate event. To get more specific, you would need to find a way to deaggregate the information, break each alert into its own event, and log them individually.
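If it helps, here is a quick sketch of a parser that pulls the available fields out of one of these aggregated blocks (the field names are taken from the sample above, not from any official Snort schema); note that the individual destination ports and, for UDP, any flags simply aren't in the log:
import re

sample = """Time: 04/13-15:29:41.660134
event_id: 6042
x.x.x.x -> x.x.x.x(portscan) UDP Filtered Portscan
Priority Count: 0
Connection Count: 200
IP Count: 66
Scanner IP Range:x.x.x.x:x.x.x.x
Port/Proto Count: 32
Port/Proto Range: 137:17500
"""

def parse_portscan(block):
    event = {}
    # source, destination and scan type line, e.g. "a.b.c.d -> e.f.g.h(portscan) UDP Filtered Portscan"
    m = re.search(r"^(\S+) -> (\S+)\s*\(portscan\)\s*(.+)$", block, re.M)
    if m:
        event["src_ip"], event["dst_ip"], event["scan_type"] = m.groups()
    # remaining "Key: value" fields
    for key in ("Priority Count", "Connection Count", "IP Count",
                "Scanner IP Range", "Port/Proto Count", "Port/Proto Range"):
        m = re.search(rf"^{re.escape(key)}:\s*(.+)$", block, re.M)
        if m:
            event[key] = m.group(1).strip()
    return event

print(parse_portscan(sample))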