IDS/IPS size of payload analysis - Snort

I've been searching for a while to find out the typical/minimum payload size to analyze when configuring security devices such as an IDS/IPS.
I know it is possible to configure both the offset and depth parameters within Snort rules. In short: a Snort rule configured with a 1-byte offset and a 7-byte depth will analyze bytes 1-7 of the payload of incoming packets, after the header.
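For concreteness, I am picturing a rule along these lines (the content bytes, addresses and SID are made up purely to show where offset and depth sit, not a real signature):

alert tcp any any -> $HOME_NET any (msg:"Illustration - match only within the first few payload bytes"; content:"|41 42 43|"; offset:1; depth:7; sid:1000001; rev:1;)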
I know the depth parameter can range from 1 to 65535, but I'd like to know the minimum number of bytes needed for accurate traffic analysis.
As an example, if I receive a Meterpreter payload that is roughly 700 bytes in size, do I need to scan the whole packet in order to raise an alert?
Thanks in advance.

Related

What value to use for Libopus encoder max_data_bytes field?

I am currently using libopus in order to encode some audio that I have.
When consulting the documentation for how to use the encoder, one of the arguments the encode function takes is max_data_bytes, an opus_int32 that has the following documentation:
Size of the allocated memory for the output payload. May be used to impose an upper limit on the instant bitrate, but should not be used as the only bitrate control
Unfortunately, I wasn't able to get much out of this definition as to how to choose this upper size, or how the argument relates to bitrate. I tried consulting some of the examples provided, such as this or this, but both define the argument as some constant without much explanation.
Could anyone help me understand the definition of this value, and what number I might be interested in using for it? Thank you!
It depends on the encoder version and encoding parameters.
In 1.1.4 the encoder doesn't merge packets, and the upper limit should be 1275 bytes. For the decoder, if the repacketizer is used, you could find packets of up to 3 × 1275 bytes.
Things may have changed in recent versions; I'm quite sure the repacketizer has been merged into the encoder somehow. Look into the RFC.
I'll just paste here some of my notes from about a year and a half ago...
// Max Opus frame size is 1275 bytes, as per RFC 6716.
// If the sample is <= 20 ms, opus_encode always returns a one-frame packet.
// If CELT is used and the sample is 40 or 60 ms, a two- or three-frame packet is generated, as the max CELT frame size is 20 ms.
// In this very specific case, the max packet size is multiplied by 2 or 3 respectively.
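To make the notes concrete, a minimal sketch in C of how I size the buffer (48 kHz stereo with 20 ms frames assumed, error handling mostly omitted):

#include <opus.h>

#define MAX_PACKET 1275                  /* single-frame upper bound from RFC 6716 */

int err;
OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);

opus_int16 pcm[960 * 2];                 /* one 20 ms frame of 48 kHz stereo PCM */
unsigned char packet[MAX_PACKET];

/* max_data_bytes is simply the capacity of 'packet'; opus_encode never writes past it
   and returns the number of bytes actually used (or a negative error code)          */
opus_int32 len = opus_encode(enc, pcm, 960, packet, MAX_PACKET);

If you let the encoder produce 40/60 ms CELT packets, or run the output through the repacketizer, size the buffer 2× or 3× larger, as in the notes above.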

What's the practical limit for the data length of UDP packet?

This is Wikipedia's explanation of the length field of the UDP header:
Length
A field that specifies the length in bytes of the UDP header
and UDP data. The minimum length is 8 bytes because that is the length
of the header. The field size sets a theoretical limit of 65,535 bytes
(8 byte header + 65,527 bytes of data) for a UDP datagram. The
practical limit for the data length which is imposed by the underlying
IPv4 protocol is 65,507 bytes (65,535 − 8 byte UDP header − 20 byte IP
header).
Why does the practical limit for the data length also subtract the 20-byte IP header?
Take a good look at the explanation of the IP header at this link:
https://www.ietf.org/rfc/rfc791.txt
I quote:
Total Length: 16 bits
Total Length is the length of the datagram, measured in octets, including internet header and data. This field allows the length of a datagram to be up to 65,535 octets. Such long datagrams are impractical for most hosts and networks. All hosts must be prepared to accept datagrams of up to 576 octets (whether they arrive whole or in fragments). It is recommended that hosts only send datagrams larger than 576 octets if they have assurance that the destination is prepared to accept the larger datagrams.
The number 576 is selected to allow a reasonable sized data block to be transmitted in addition to the required header information. For example, this size allows a data block of 512 octets plus 64 header octets to fit in a datagram. The maximal internet header is 60 octets, and a typical internet header is 20 octets, allowing a margin for headers of higher level protocols.
So the maximum total length is 65535 but this includes the IP header itself.
Therefore you have an IP payload that can be 65535 - 20 = 65515.
But the payload of IP in your case is UDP, and UDP has a header of its own, which is 8 bytes. Hence you get to the theoretical limit of the payload of a UDP packet: 65,535 − 8 byte UDP header − 20 byte IP header = 65,507 bytes.
Note the use of theoretical instead of practical. The practical limit of a UDP packet takes into account the probability of fragmentation and thus considers the MTU of the network layer. The link above also has an interesting sentence containing the value 576: 576 − 20 − 8 = 548, which is not quite 534 but getting close. This might explain that practical limit.
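If you want to see the theoretical limit for yourself, here is a quick sketch (Linux, IPv4 and loopback assumed; off loopback, fragmentation and the path MTU decide what actually gets through, which is the practical side of the story):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9) };
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    static char buf[70000];
    /* 65,535 - 20 (IP header) - 8 (UDP header) = 65,507: the largest datagram IPv4 can describe */
    ssize_t ok  = sendto(s, buf, 65507, 0, (struct sockaddr *)&dst, sizeof dst);
    /* one byte more and the 16-bit total-length field can no longer hold it */
    ssize_t bad = sendto(s, buf, 65508, 0, (struct sockaddr *)&dst, sizeof dst);
    printf("65507 -> %zd, 65508 -> %zd (expect EMSGSIZE: %s)\n", ok, bad, strerror(errno));
    return 0;
}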
Because UDP packets are encapsulated in IP packets, whose headers are 20 bytes. You can't send UDP packets without encapsulating them in IP packets. In practice the actual limit is usually far lower, and it depends on the MTU of the routers between the two endpoints transmitting the UDP packet.
Because the IP header has to be (a) sent and (b) counted in the 16-bit length word. See RFC 791 #3.1.
However, the real practical limit is generally accepted to be 534 bytes, to avoid fragmentation at the IP layer, which increases the risk of datagram loss.

iPhone 4S - BLE data transfer speed

I've been tinkering around with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make them transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the Ack transfer after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer in BLE as fast as the documentation says it should be able to do. What sort of speed are you getting if faster than mine?
Thanks a lot
Look at Apple's guidelines and you will see that a connection update request is required to speed up your connection.
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I use min = 20 ms, max = 40 ms.
I hope this helps.
Roman
If you are able to use a higher MTU size (negotiated by iOS), then you would be able to increase the bandwidth even more, because the 4-byte L2CAP header and the 3-byte ATT header would then only be transmitted in one packet rather than in every packet.
If you are able to transmit 6 packets per connection interval, then you would be able to put in 35 extra bytes per connection interval (the 7-byte header would still be there for the first packet). The MTU size could also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to assemble the packet again). The max MTU size allowed by ATT is 515 bytes (the max size of an ATT value is 512 bytes, plus a 3-byte header for the opcode and handle).
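As a rough back-of-the-envelope check (assuming a 20 ms connection interval, the default ATT_MTU of 23, i.e. 20 bytes of application payload per packet, and 6 packets per interval as above - all of these depend on what iOS actually negotiates):

/* rough BLE throughput estimate under the assumptions stated above */
double estimate_throughput(void) {
    double interval_s      = 0.020;  /* negotiated connection interval: 20 ms   */
    int    packets_per_ci  = 6;      /* packets the stack accepts per interval  */
    int    payload_per_pkt = 20;     /* ATT_MTU 23 minus the 3-byte ATT header  */
    return packets_per_ci * payload_per_pkt / interval_s;   /* 6000 bytes/s     */
}
/* with only 3 packets per interval this drops to 60 bytes / 20 ms ~ 3 KB/s,
   the figure quoted in the question */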

Is there an 'optimal' buffer size when using send()?

Let's say you're transferring a file of arbitrary length in chunks over TCP/IP:
looping...
read(buffer, LENGTH)
send(mysocket, buffer, LENGTH, flags)
My question is, what would be the optimal value of LENGTH? Or does it not matter at all? I've seen everything from 256 bytes to 8192 bytes being used.
It depends on what you mean by optimal. For optimal use of the bandwidth, you want to maximize the packet size, so send at least the network packet size (which on Ethernet is usually about 1500 bytes). If you are reading from disk, 4096 or 8192 bytes would be a good value.
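Whichever size you pick, remember that read() and send() may return short counts, so loop on their return values rather than on LENGTH. A sketch (8192-byte buffer assumed, error handling kept minimal):

#include <unistd.h>
#include <sys/socket.h>

#define LENGTH 8192                          /* a reasonable chunk for disk reads */

/* returns 0 on success, -1 on a read or socket error */
int send_file(int fd, int sock) {
    char buffer[LENGTH];
    ssize_t n;
    while ((n = read(fd, buffer, LENGTH)) > 0) {
        ssize_t off = 0;
        while (off < n) {                    /* send() may accept fewer bytes than asked */
            ssize_t sent = send(sock, buffer + off, n - off, 0);
            if (sent < 0)
                return -1;
            off += sent;
        }
    }
    return n < 0 ? -1 : 0;
}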
If your buffer size translates into packet size, then shorter buffers are better -- less to retransmit in the event of a packet error.
ATM took this to the extreme with a 53-byte cell.
But depending upon your library, it might be doing some buffering of its own and setting its packet size independently. YMMV.
If you are sending large amounts of data over a high latency connection, you can get better throughput with a larger send buffer. Here is a good explanation:
http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html
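The knob the article is talking about is the socket send buffer; something along these lines (the 256 KB value is only an example, and the kernel may round or clamp whatever you request):

#include <sys/socket.h>

/* 'sock' is an already-created TCP socket */
int enlarge_send_buffer(int sock) {
    int sndbuf = 256 * 1024;                 /* example value for a high-latency path */
    return setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
}

Read the value back with getsockopt(SO_SNDBUF) if you want to see what the kernel actually granted.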

Converting from bandwidth to traffic gives different results depending on operator position?

This must be a stupid question, but nevertheless I find it curious:
Say I have a steady download of 128Kbps.
How much disk space is going to be consumed after an hour, in megabytes?
128 x 60 x 60 / 8 / 1024 = 56.25 MB
But
128 x 60 x 60 / 1000 / 8 = 57.6 MB
So what is the correct way to calculate this?
Thanks!
In one calculation you're dividing by 1000, but in the other you're dividing by 1024. It shouldn't be any surprise that you get different numbers.
Officially, the International Electrotechnical Commission standards body has tried to push "kibibyte" as an alternative to "kilobyte" when you're talking about the 1024-based version. But if you use it, people will laugh at you.
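To spell it out with your own numbers (only the last division differs):

double megabytes_after_one_hour(int divisor) {   /* pass 1000 or 1024 */
    double kilobits  = 128.0 * 60 * 60;  /* 460,800 kilobits downloaded in one hour */
    double kilobytes = kilobits / 8;     /* 57,600 kilobytes                         */
    return kilobytes / divisor;          /* 57.6 with 1000, 56.25 with 1024          */
}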
Please remember that there is overhead in any transmission. There can be "dropped" packets, etc. Also, there is generally some upstream traffic as your PC acknowledges receipt of packets. Finally, since packets can be received out of order, the packets themselves contain "extra" data to allow the receiver to reconstruct the data in the proper order.
OK, I found an official explanation from Symantec on the matter:
http://seer.entsupport.symantec.com/docs/274171.htm
It seems the idea is to convert from bits to bytes as early as possible in the calculation, and then the usual division by 1024 comes into play.
I just hope it's a standard procedure, and not a Symantec-imposed one :).