Reduce / Limit number of alerts occurring from Snort Rule Trigger (Syn Flood) - snort

So I have a snort rule that detects syn flood attacks that looks like this:
alert tcp any any -> $HOME_NET 80 (msg:"SYN Flood - SSH"; flags:S;
flow: stateless; detection_filter: track by_dst, count 40, seconds 10;
GID:1; sid:10000002; rev:001; classtype:attempted-dos;)
The problem is, when I trigger it using tcpreplay (with a DDoS.pcapng file):
sudo tcpreplay -i interface /home/Practicak/DDoS.pcapng
When listening on my VM1 and after running tcpreplay, I get a lot of alerts, e.g. hundreds of "SYN Flood Detected" alerts.
How can I limit this so that I only get one or a few alerts for each SYN flood that is initiated, i.e. for each run of tcpreplay with the pcap file? And is it good practice to display fewer alerts?
Thanks

#Liam,
The creation of a threshold would be one answer (see the example after this answer).
Ref: http://manual-snort-org.s3-website-us-east-1.amazonaws.com/node35.html
Another would be to aggregate in the primary data store that your alert logging feeds into, for example Elasticsearch or Splunk.
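A minimal sketch of the threshold approach, assuming Snort 2.x and the sid from the rule above: an event_filter in snort.conf limits how often the alert is logged, without changing when the rule itself matches.
event_filter gen_id 1, sig_id 10000002, type limit, track by_dst, count 1, seconds 60
Your detection_filter still requires 40 SYNs in 10 seconds before the rule fires; with type limit, count 1, seconds 60 you then get at most one logged alert per destination per 60 seconds for this signature, however many more SYNs arrive. Adjust the seconds value to taste.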

Related

Advantage of multiple socket connections

I keep hearing people say that to get better throughput you should create multiple socket connections.
But my understanding is that however many TCP sockets you open between two endpoints, the IP layer is still one, so I'm not sure where this additional throughput comes from.
The additional throughput comes from increasing the amount of data sent in the first couple of round-trip times (RTTs). TCP can send only IW (initial window) packets in the first RTT. The amount is then doubled each RTT (slow start). If you open 4 connections you can send 4 * IW packets in the first RTT, so the throughput is quadrupled.
Let's say that a client requests a file that requires IW+1 packets. Opening two connections can complete the sending in one RTT, rather than two RTTs.
HOWEVER, this comes at a price. The initial packets are sent as a burst, which can cause severe congestion and packet loss.
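A back-of-the-envelope sketch of that arithmetic, assuming an idealized slow start with no loss and an initial window of 10 segments (an illustrative value only):
use strict;
use warnings;

my $iw = 10;    # assumed initial congestion window, in segments
for my $conns (1, 2, 4) {
    for my $rtt (1 .. 3) {
        # after r RTTs each connection can have sent roughly IW * 2^(r-1) segments
        my $segments = $conns * $iw * 2 ** ($rtt - 1);
        printf "%d connection(s), RTT %d: up to %d segments\n", $conns, $rtt, $segments;
    }
}
The burst problem mentioned above shows up here too: with 4 connections the very first RTT already puts 40 segments on the wire at once.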

concerned about perl IO::Socket losing UDP packets

I have a small UDP Perl service that receives syslog data, fiddles with it, and sends it on its way (over UDP) back to a syslog server that is also running on localhost. It works really well, but I was concerned that it might have been losing syslog events, so I tested it.
Basically I pushed 100 "this is a test $NUM++" messages in, and sometimes 100 would come out - but once only 99 came out (as measured by tcpdump running on the host running the Perl script). This is on our production system where it's handling 500-1500 syslog msg/sec. As usual it works perfectly when it's only got test traffic, but under load I'm not sure.
tcpdump shows the 100 events coming in over eth0, but only 99 coming out over lo. So maybe one never made it into '$rcvSock', or maybe one got lost going out over lo via '$sndSock'.
Basically the Perl code is like below. However, the "#fiddling" bit does involve some pauses for DNS lookups, so there is some "read->block->write" going on. I don't think "Listen" is supported under UDP, so I can't be sure whether the Perl script is blocking-and-dropping the next receive while it's doing the "fiddling".
Can anyone shed any light on where the loss could be occurring and how to get past it?
use IO::Socket::INET;

# $hn, $rcvPort, $timeout, $syslogSrv and $syslogPort are set elsewhere

# UDP socket we receive syslog datagrams on
my $rcvSock = IO::Socket::INET->new(
    LocalAddr => $hn,
    LocalPort => $rcvPort,
    Timeout   => $timeout,
    Proto     => 'udp'
) or die "cannot create receive socket: $!";

# UDP socket we relay the (fiddled) records out on
my $sndSock = IO::Socket::INET->new(
    PeerAddr => $syslogSrv,
    PeerPort => $syslogPort,
    Timeout  => $timeout,
    Proto    => 'udp',
    Blocking => 0
) or die "cannot create send socket: $!";

while (1) {
    my $now = time;
    $rcvSock->recv(my $input, 2560);       # blocks until a datagram arrives
    my $remote_addr = $rcvSock->peerhost();
    # fiddling occurs (includes DNS lookups, which can block for a while)
    $sndSock->send($input);
}
Thank you, the SO_RCVBUF suggestion did the trick.
What is happening is that I am pushing in (say) 1000 syslog records/packets per second, but the DNS queries I do can pause the processing by 1 sec/record. So this means that after processing ONE record, there are now 999 records to process. After two seconds there are 1998. This isn't looking good...
Those packets can be queued by the OS according to SO_RCVBUF, which by default (on Red Hat) is 212992 bytes. So assuming an average of 400 bytes per syslog record, that's a maximum of ~530 records queued up before the kernel starts dropping new packets. So I can increase the SO_RCVBUF 10- or even 100-fold, but it won't get around the fundamental issue of that big pause. However, in reality I'm talking about peak rates: there are moments when the records/sec drop right down, and a lot of syslog records don't require DNS lookups (i.e. I skip them). Also, by caching the heck out of those DNS lookups, I can minimize how often they are done, so 1000/sec could become 101/sec involving DNS, which in turn could be 99% cacheable, leaving only 2-5/sec that need actual DNS lookups - and at that level a healthy cache will get you through the peak-load issues.
I am not a programmer, so doing this properly with input queues, asynchronous DNS lookups, etc. is beyond me. But I do know iptables... So I'm intending to run several of these on different ports and use iptables to round-robin-deliver incoming packets onto them, giving them async functionality without me needing to write a single line of code (a rough iptables sketch is below). That should solve this for the load levels I need to worry about :-)
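A rough sketch of that iptables round-robin idea, assuming syslog arrives on UDP 514 and three relay instances listen on ports 5141-5143 (the port numbers are placeholders); the statistic match in nth mode spreads incoming packets across the three REDIRECT targets:
# first rule takes every 3rd packet (offset 0), i.e. 1/3 of the traffic
iptables -t nat -A PREROUTING -p udp --dport 514 -m statistic --mode nth --every 3 --packet 0 -j REDIRECT --to-ports 5141
# of the remaining 2/3, take every 2nd packet, i.e. another 1/3 of the total
iptables -t nat -A PREROUTING -p udp --dport 514 -m statistic --mode nth --every 2 --packet 0 -j REDIRECT --to-ports 5142
# whatever is left (the final 1/3) goes to the third instance
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 5143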
Thanks!
Try to increase SO_RCVBUF on the receiving socket:
$rcvSock->setsockopt(SOL_SOCKET, SO_RCVBUF, ...)
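A fuller sketch (the 4 MB value is just an illustration, and on Linux the kernel caps the effective size at net.core.rmem_max):
use Socket qw(SOL_SOCKET SO_RCVBUF);

$rcvSock->setsockopt(SOL_SOCKET, SO_RCVBUF, 4 * 1024 * 1024)
    or warn "setsockopt(SO_RCVBUF) failed: $!";
# read it back to see what the kernel actually granted
print "receive buffer is now ", $rcvSock->getsockopt(SOL_SOCKET, SO_RCVBUF), " bytes\n";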
sar is very helpful for this kind of network statistics investigation.
idgmerr/s: The number of received UDP datagrams per second that could not be delivered for reasons other than the lack of an application at the destination port [udpInErrors].
$ sar -n UDP 5
08:18:12 AM idgm/s odgm/s noport/s idgmerr/s
08:18:15 AM 121.33 121.33 7.67 0.00

how can I transfer large data over tcp socket

How can I transfer large data without splitting? I am using a TCP socket. It's for a game. I can't use UDP, and there might be 1200 values in an array. I am sending the array in JSON format, but the server receives it split up.
Also, is there any option to send an HTTP request like this over TCP? I need the response in order. It should also be fast.
Thanks,
You can't.
HTTP may chunk it
TCP will segment it
IP will packetize it
routers will fragment it ...
and TCP will reassemble it all at the other end.
There isn't a problem here to solve.
You do not have much control over splitting packets/datagrams. The network decides about this.
In the case of IP, you have the DF (don't fragment) flag, but I doubt it will be of much help here. If you are communicating over Ethernet, then a 1200-element array may not fit into a single Ethernet frame (the payload size is limited by the MTU, typically 1500 octets).
Why does your application depend on the fact that the whole data must arrive in a single unit, and not in a single connection (comprised potentially of multiple units)?
how can I transfer large data without splitting.
I'm interpreting the above to be roughly equivalent to "how can I transfer my data across a TCP connection using as few TCP packets as possible". As others have noted, there is no way to guarantee that your data will be placed into a single TCP packet -- but you can do some things to make it more likely. Here are some things I would do:
Keep a single TCP connection open. (HTTP traditionally opens a separate TCP connection for each request, but for low-latency you can't afford to do that. Instead you need to open a single TCP connection, keep it open, and continue sending/receiving data on it for as long as necessary).
Reduce the amount of data you need to send. (i.e. are there things that you are sending that the receiving program already knows? If so, don't send them)
Reduce the number of bytes you need to send. (The easiest way to do this is to zlib-compress your message-data before you send it, and have the receiving program decompress the message after receiving it. This can give you a size-reduction of 50-90%, depending on the content of your data)
Turn off Nagle's algorithm on your TCP socket. That will reduce latency (by up to 200 ms) and discourage the TCP stack from playing unnecessary games with your data.
Send each data packet with a single send() call (if that means manually copying all of the data items into a separate memory buffer before calling send(), then so be it).
Note that even after you do all of the above, the TCP layer will still sometimes spread your messages across multiple packets, etc -- that's just the way TCP works. And even if your local TCP stack never did that, the receiving computer's TCP stack would still sometimes merge the data from consecutive TCP packets together inside its receive buffer. So the receiving program is always going to "receive it like splitted" sometimes, because TCP is a stream-based protocol and does not maintain message boundaries. (If you want message boundaries, you'll have to do your own framing -- the easiest way is usually to send a fixed-size (e.g. 1, 2, or 4-byte) integer byte-count field before each message, so the receiver knows how many bytes it needs to read in before it has a full message to parse)
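As a rough illustration of that length-prefix framing (a sketch only; send_frame/recv_frame are made-up helper names, and it assumes a blocking TCP socket):
use strict;
use warnings;

# Sender: prepend a 4-byte big-endian length to each JSON message.
sub send_frame {
    my ($sock, $payload) = @_;
    my $frame = pack('N', length $payload) . $payload;
    my $off = 0;
    while ($off < length $frame) {                    # handle partial writes
        my $wrote = syswrite($sock, $frame, length($frame) - $off, $off);
        die "send failed: $!" unless defined $wrote;
        $off += $wrote;
    }
}

# Receiver: read exactly 4 length bytes, then exactly that many payload bytes.
sub read_exact {
    my ($sock, $n) = @_;
    my $buf = '';
    while (length($buf) < $n) {
        my $got = sysread($sock, $buf, $n - length($buf), length($buf));
        die "connection closed or read error: $!" unless $got;
    }
    return $buf;
}

sub recv_frame {
    my ($sock) = @_;
    my $len = unpack('N', read_exact($sock, 4));
    return read_exact($sock, $len);   # one complete message, boundaries restored
}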
Consider the idea that the issue may be elsewhere, or that you may be sending too much unnecessary data. For example, with PHP there is the isset() function. If you're creating an internet-based turn-based game, you don't need to send all 1,200 variables back and forth every single time. Just send what changed, and when the other player receives that data, only change the variables that are set.

General overhead of creating a TCP connection

I'd like to know the general cost of creating a new connection, compared to UDP. I know TCP requires an initial exchange of packets (the 3 way handshake). What would be other costs? For instance is there some sort of magic in the kernel needed for setting up buffers etc?
The reason I'm asking is I can keep an existing connection open and reuse it as needed. However if there is little overhead reconnecting it would reduce complexity.
Once a UDP packet's been dumped onto the wire, the UDP protocol stack is free to completely forget about it. With TCP, there's at bare minimum the connection details (source/dest port and source/dest IP), the sequence number, the window size for the connection etc... It's not a huge amount of data, but adds up quickly on a busy server with many connections.
And then there's the 3-way handshake as well. Some braindead (and/or malicious) systems can abuse the process (look up 'SYN flood'), or just drop the connection on their end, leaving your system waiting for a response or close notice that'll never come. The plus side is that with TCP the system will do its best to make sure the packet gets where it has to go. With UDP, there are no guarantees at all.
Compared to the latency of the packet exchange, all other costs such as kernel setup times are insignificant.
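If you want to put numbers on it for your own network, a rough timing sketch along these lines can make the handshake cost visible (the hostname and port are placeholders, and the "reused" loop does no real work here):
use strict;
use warnings;
use IO::Socket::INET;
use Time::HiRes qw(gettimeofday tv_interval);

my ($host, $port) = ('example.com', 80);   # placeholder endpoint

# reconnecting for every message pays the 3-way handshake each time
my $t0 = [gettimeofday];
for (1 .. 10) {
    my $s = IO::Socket::INET->new(PeerAddr => $host, PeerPort => $port, Proto => 'tcp')
        or die "connect failed: $!";
    close $s;
}
printf "10 fresh connects: %.3f s\n", tv_interval($t0);

# reusing one connection pays it once
$t0 = [gettimeofday];
my $s = IO::Socket::INET->new(PeerAddr => $host, PeerPort => $port, Proto => 'tcp')
    or die "connect failed: $!";
for (1 .. 10) {
    # send/receive on the same $s here
}
close $s;
printf "1 connect, reused: %.3f s\n", tv_interval($t0);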
OPTION 1: The general cost of creating a TCP connection are:
Create socket connection
Send data
Tear down socket connection
Step 1: Requires an exchange of packets, so it's delayed by the to-and-from network latency plus the destination server's service time. No significant CPU usage on either box is involved.
Step 2: Depends on the size of the message.
Step 3: IIRC, just sends a 'closing now' packet, w/ no wait for destination ack, so no latency involved.
OPTION 2: Costs of UDP:
Create UDP object
Send data
Close UDP object
Step 1: Requires minimal setup, no latency worries, very fast.
Step 2: BE CAREFUL OF SIZE, there is no retransmit in UDP since it doesn't care if the packet was received by anyone or not. I've heard that the larger the message, the greater probability of data being received corrupted, and that a rule of thumb is that you'll lose a certain percentage of messages over 20 MB.
Step 3: Minimal work, minimal time.
OPTION 3: Use ZeroMQ Instead
You're comparing TCP to UDP with a goal of reducing reconnection time. THERE IS A NICE COMPROMISE: ZeroMQ sockets.
ZMQ allows you to set up a publishing socket where you don't care if anyone is listening (like UDP), and have multiple listeners on that socket. This is NOT a UDP socket - it's an alternative to both of these protocols.
See: ZeroMQ.org for details.
It's very high speed and fault tolerant, and is in increasing use in the financial industry for those reasons.

Benefits of "Don't Fragment" on TCP Packets?

One of our customers is having trouble submitting data from our application (on their PC) to a server (different geographical location). When sending packets under 1100 bytes everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router/first few hops).
I found that our application sets the DF flag for these packets, and I believe a router along the way to the server has an MTU less than/equal to 1100 and dropping the packet.
This affects 1 client in 5000, but since everybody's routes will be different this is expected.
The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we do it; the code to do this was written by a previous developer.
So... Are there any benefits OR justification to setting the DF flag on TCP packets for application data?
I can think of reasons it is needed for network diagnostics applications, but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL is like a stream: regardless of fragmentation, as long as the stream is rebuilt at the end, there's no problem.
If there's no good justification I will be changing the behaviour of our application.
Thanks in advance.
The DF flag is typically set on IP packets carrying TCP segments.
This is because a TCP connection can dynamically change its segment size to match the path MTU, and better overall performance is achieved when the TCP segments are each carried in one IP packet.
So TCP packets have the DF flag set, which should cause an ICMP Fragmentation Needed packet to be returned if an intermediate router has to discard a packet because it's too large. The sending TCP will then reduce its estimate of the connection's Path MTU (Maximum Transmission Unit) and re-send in smaller segments. If DF wasn't set, the sending TCP would never know that it was sending segments that are too large. This process is called PMTU-D ("Path MTU Discovery").
If the ICMP Fragmentation Needed packets aren't getting through, then you're dealing with a broken network. Ideally the first step would be to identify the misconfigured device and have it corrected; however, if that doesn't work out, then you can add a configuration knob to your application that tells it to set the TCP_MAXSEG socket option with setsockopt(). (A typical example of a misconfigured device is a router or firewall that's been configured by an inexperienced network administrator to drop all ICMP, not realising that Fragmentation Needed packets are required by TCP PMTU-D.)
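A hedged sketch of that knob in Perl (it assumes your Socket module exports TCP_MAXSEG; 'server.example.com', port 443, and the 1060-byte MSS are placeholder values chosen to keep segments below the suspected 1100-byte path MTU):
use strict;
use warnings;
use IO::Socket::INET;
use Socket qw(IPPROTO_TCP TCP_MAXSEG pack_sockaddr_in inet_aton);

# create the socket unconnected so the option is applied before the handshake
my $sock = IO::Socket::INET->new(Proto => 'tcp') or die "socket: $!";
$sock->setsockopt(IPPROTO_TCP, TCP_MAXSEG, 1060)
    or warn "setsockopt(TCP_MAXSEG) failed: $!";
$sock->connect(pack_sockaddr_in(443, inet_aton('server.example.com')))
    or die "connect: $!";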
The operation of Path-MTU discovery is described in RFC 1191, https://www.rfc-editor.org/rfc/rfc1191.
It is better for TCP to discover the Path-MTU than to have every packet over a certain size fragmented into two pieces (typically one large and one small).
Apparently, some protocols like NFS benefit from avoiding fragmentation. However, you're right in that you typically shouldn't be requesting DF unless you really require it.