Snort rules for OS detection - Nmap

I need to write Snort rules for the following OS-detection (Nmap) packets:
ICMP echo (IE)
The IE test involves sending two ICMP echo request packets to the target. The first one has the IP DF bit set, a type-of-service (TOS) byte value of zero, a code of nine (even though it should be zero), the sequence number 295, a random IP ID and ICMP request identifier, and 120 bytes of 0x00 for the data payload.
The second ping query is similar, except a TOS of four (IP_TOS_RELIABILITY) is used, the code is zero, 150 bytes of data is sent, and the ICMP request ID and sequence numbers are incremented by one from the previous query values.
The results of both of these probes are combined into an IE line containing the R, DFI, T, TG, and CD tests. The R value is only true (Y) if both probes elicit responses. The T and CD values are taken from the response to the first probe only, since they are highly unlikely to differ. DFI is a custom test for this special dual-probe ICMP case.
These ICMP probes follow immediately after the TCP sequence probes to ensure valid results of the shared IP ID sequence number test (see the section called “Shared IP ID sequence Boolean (SS)”).
I wrote the following rules:
alert icmp any any -> any any (msg:"i1"; sid:1000001; icmp_seq:295; tos:0; dsize:120; content:"|00|"; fragbits:D; icode:9;)
alert icmp any any -> any any (msg:"i2"; sid:1000002; icmp_seq:296; tos:4; dsize:150; content:"|00|"; fragbits:D; icode:0;)
These rules are wrong, and I have no idea how to correct them; I'd be glad if someone could help me. Thanks in advance.

Related

I cannot send short messages over TCP

I am having trouble tuning TCP client-server communication.
My current project has a client running on a PC (C#) and a server
running on embedded Linux 4.1.22-ltsi.
They use UDP to exchange data.
The client and server work in blocking mode and
send each other short messages
(16, 60, 200 bytes, etc.) that contain either a command or a set of parameters.
The messages do not include any header with the message length, because
UDP is a message-oriented protocol: its recvfrom() API returns the number of received bytes.
For my server's program structure it is important to receive and process each message as a whole.
The problem arose when I tried to implement TCP communication instead of UDP.
The server's receive buffer (recv() TCP API) is 2048 bytes:
#define UDP_RX_BUF_SIZE 2048
numbytes = recv(fd_connect, rx_buffer, UDP_RX_BUF_SIZE, MSG_WAITALL/*BLOCKING_MODE*/);
So the recv() API only returns from waiting when rx_buffer is full, i.e. after it has received
2048 bytes. This breaks the whole program design. In other words, when the client sends a 16-byte command
to the server and waits for an answer, the server's recv() keeps the message
"in its stomach" until it has received 2048 bytes.
I tried to fix it as below, without success:
On the client side (C#) I set the socket parameter theSocket.NoDelay.
When I checked this in the sniffer I saw that the client sends the messages "as I want",
with the requested length.
On the server side I set the TCP_NODELAY socket option to 1:
int optval = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval));
On the server side (Linux) I checked the SO_SNDLOWAT/SO_RCVLOWAT socket options, and they are 1 byte each.
Please see the attached sniffer log picture. 10.0.0.10 is the client; 10.0.0.106 is the server. It can be seen that the client sets the PSH (push) flag, telling the receiving side to pass the incoming data to the application immediately rather than fill a buffer.
An additional question: what are the SSH-encrypted packets that run between the two sides? I suppose my Eclipse debugger on the PC (running the server application through the same Ethernet connection) sends them. Am I right?
So, my problem is how to make the recv() API return each short message (16, 60, 200 bytes, etc.) instead of accumulating them until the receive buffer fills.
TCP is connection-oriented, and it also maintains the order in which packets are sent and received.
That said, a TCP client receives a stream of bytes, not individual messages as with UDP. So you will need to send the message length (and perhaps a marker) as the initial bytes.
The receiver can first read the packet length, then read data until that length is reached, and then expect the next packet length.
You can also look at libraries like Netty or ZeroMQ, which do this extra work for you.
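For illustration, here is a minimal C sketch of such length-prefix framing on the receiving side. It assumes a 4-byte length prefix in network byte order before every message; recv_all and recv_message are hypothetical helper names, not standard APIs:

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Loop until exactly len bytes have been read; a TCP recv() may
   return fewer bytes than requested. */
static ssize_t recv_all(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0)
            return n;              /* 0 = peer closed, -1 = error */
        got += n;
    }
    return (ssize_t)got;
}

/* Receive one framed message: read the 4-byte length prefix first,
   then exactly that many payload bytes. */
static ssize_t recv_message(int fd, void *buf, size_t bufsize)
{
    uint32_t netlen;
    ssize_t rc = recv_all(fd, &netlen, sizeof(netlen));
    if (rc <= 0)
        return rc;
    uint32_t len = ntohl(netlen);
    if (len > bufsize)
        return -1;                 /* message larger than caller's buffer */
    return recv_all(fd, buf, len);
}

The C# client would do the mirror image: prepend the 4-byte length (in network byte order) to each message before sending it.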

JMeter TCP Sampler doesn't close the socket after data is sent

I've just recently started using JMeter.
I'm trying to run a TCP sampler on one of my servers.
The TCP sampler is set to all default values, with my IP, port number and text to send.
The server receives the text and responds as expected.
However, once JMeter receives the response it doesn't close the connection; it just waits until I stop the test manually, at which point the server logs show the client has disconnected.
I also have a response assertion which looks for this string:
{"SERVER":[{"End":200}]}\r\n
The assertion is set to apply to the main sample and sub-samples, and the response field to test is set to Text Response.
With the pattern matching rules set to Equals I get:
Device Server Sampler
Device Server Response Assertion : Test failed: text expected to equal /
****** received : {"SERVER":[{"End":200}]}[[[
]]]
****** comparison: {"SERVER":[{"End":200}]}[[[\r\n]]]
/
If I set pattern matching to Contains I get:
Device Server Sampler
Which I can only assume at this point is a pass?
But no matter how I try it, JMeter never closes the socket, so when I stop the tests myself and view the results in a table, the status is marked as Warning, even though the correct number of bytes has been received and the data is correct.
JMeter doesn't seem to like \r\n, so I've run the same tests with those removed from the strings on both sides, but the sockets still remain open until I stop the tests.
Got any ideas what the issue may be?
In the TCP Sampler I needed to set the "End of line (EOL) byte value" to 10, which is the decimal byte value for \n.

TCP/IP basics, offset, reassembly

I'm writing a packet generator right now, and testing it with Wireshark and a VM. One exercise on my checklist is to send 3 packets in a row:
1. TCP on port 80, with SYN=1 and MF=1.
2. TCP on port 135, with SYN=1 and MF=1.
3. TCP on port 80, with MF=0 and offset=24.
I'm sending all the packets with the same ID field in the IP header, so as I understand it, Wireshark should try to reassemble these packets.
But will it reassemble packets from different ports? And what should we get as the final result?
All I get is 3 IPv4 packets.
http://cs625124.vk.me/v625124860/10bf5/BQFUbKT7zVs.jpg
Addition: I noticed that if I change the offset of the last TCP packet to 16, we get a somewhat different kind of traffic:
We get one HTTP (or "continuation") packet, with a wrong checksum. I tried to copy the correct checksum to the first TCP packet, and then I got an RST, so I think Wireshark interpreted the SYN from the 1st packet:
http://s28.postimg.org/z3w7ibhjx/image.png
So could you please explain to me whether the last result was correct? I would appreciate any help. Sorry if this is something basic; it's my first experience writing a WinForms application and using the Pcap.Net library. Thanks in advance! Sorry for the plain links, I have no reputation.
First, a TCP session is defined by the tuple:
Side A's IP address.
Side A's Port.
Side B's IP address.
Side B's Port.
If you have packets with different tuples, they will not be part of the same TCP session.
You get an RST when the server closes the session.
It is likely the server doesn't like getting SYN packets from port 21 (FTP) to its port 80 (HTTP).

Why does the skb buffer need to be skipped by 20 bytes to read the transport header when the packet is inbound?

I am writing a network module in Linux, and I see that the TCP header can be extracted only after skipping 20 bytes from the skb buffer, even though the API is 'skb_transport_header'.
What is the reason behind this? Can somebody please explain in detail? The same is not required for outgoing packets. I understand that while receiving a packet the headers are removed as it flows from L1 to L5, whereas when a packet goes out the headers are added. How does this make a difference here?
/* For an input packet */
struct tcphdr *tcp;
tcp = (struct tcphdr *)(skb_transport_header(skb) + 20);

/* For an outgoing packet */
struct tcphdr *tcp;
tcp = (struct tcphdr *)(skb_transport_header(skb));
It depends on where in the stack you process the packet. Just after receipt of the packet, the transport header offset won't yet have been set. Once you've gotten to the point where it's been determined that this packet is in fact destined to the local box, that should no longer be necessary. This happens for IPv4 in ip_local_deliver_finish(). (Note that tcp_hdr(), for example, assumes that the transport_header location is already set.)
This makes total sense (even though it can be hard to determine where things like this happen in the normal flow): As each layer is recognized and processed, the starting offset of the next layer is recorded in the sk_buff. The headers aren't actually removed, the skb "data" location is just adjusted to point beyond them. And the layer-specific location is similarly adjusted.
On output, it's a little more straightforward and is done in the opposite order: transport header will be created first. Then, the network header is prepended to that, etc.
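As an illustration, here is a hedged sketch of how an input-path module could locate the TCP header from the network header instead of hard-coding 20 (those 20 bytes are just the minimum IPv4 header length; iph->ihl also covers packets carrying IP options). It assumes a receive-path hook that runs before ip_local_deliver_finish() has set the transport header, and it ignores IP fragments and non-linear skbs:

#include <linux/in.h>
#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/tcp.h>

static struct tcphdr *locate_tcp_header(struct sk_buff *skb)
{
    /* The network header offset is already set this early in the
       receive path, so ip_hdr() is valid here. */
    struct iphdr *iph = ip_hdr(skb);

    if (iph->protocol != IPPROTO_TCP)
        return NULL;

    /* ihl counts 32-bit words, so ihl * 4 is the actual IPv4 header
       length (20 bytes only when there are no IP options). */
    return (struct tcphdr *)((unsigned char *)iph + iph->ihl * 4);
}

On the output path, skb_transport_header() already points at the TCP header, as in the question's second snippet.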

recvfrom() only gets up to 2048 bytes from UDP socket

I have to call the function repeatedly to get all the data, even though the len argument is set to 10240. But eventually a call blocks, once there is no more data. How can I get all the data and return safely, in a platform-independent way?
BTW, I use netcat on the sender side:
cat ocr_pi.png | nc -u server 5555
Is this issue related to nc's behavior? I didn't find any parameter to set the UDP packet size (-O is for TCP).
Thanks.
UDP sends and receives data as messages. In the len argument, you tell recvfrom() the max message size you can receive, and then recvfrom() waits until a full message arrives, regardless of its size. UDP messages are self-contained. Unlike TCP, a UDP message cannot be partially sent/received. It is an all-or-nothing thing. If the size of the received message is greater than the len value you specify, the message is discarded and you get an error.
The only way recvfrom() blocks is if there is no message available to read. If you don't want to block, use select() (or pselect() or epoll or other platform equivalent) to specify a timeout to wait for a message to arrive, and then call recvfrom() only if there is actually something to read.
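For example, here is a minimal sketch of that select()-then-recvfrom() pattern, assuming fd is an already-bound UDP socket and recv_with_timeout is a hypothetical helper name:

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Wait up to `seconds` for a datagram; returns 0 on timeout, -1 on
   error, otherwise the number of bytes in the received message. */
static ssize_t recv_with_timeout(int fd, void *buf, size_t len, int seconds)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (ready <= 0)
        return ready;              /* 0 = timeout, -1 = error */

    /* A datagram is now waiting, so this call will not block; one
       recvfrom() returns at most one whole message. */
    return recvfrom(fd, buf, len, 0, NULL, NULL);
}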