Identifying DNS packets - sockets

When looking at a packet's raw bytes, how would you identify a DNS packet?
The IP header's protocol field tells you that a UDP datagram follows, but inside the UDP datagram there is no protocol field to specify what comes next and, from what I can see, there is nothing inside it that would uniquely identify it as a DNS packet.

Other than it being on port 53, there are a few things you can look out for which might give a hint that you're looking at DNS traffic.
I will refer to the field names used in §4.1 of RFC 1035 a lot here:
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                      ID                       |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR|   Opcode  |AA|TC|RD|RA|   Z    |   RCODE   |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    QDCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ANCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    NSCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ARCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
As you can see above, the header is 12 bytes long: a 2-byte ID, 2 bytes of flags, and 4 x 2 bytes of counts.
In any DNS packet the QDCOUNT field will be exactly one (0x0001). Technically other values are allowed by the protocol, but in practice they are never used.
In a query (QR == 0) the ANCOUNT and NSCOUNT values will be exactly zero (0x0000), and the ARCOUNT will typically be 0, 1, or 2, depending on whether EDNS0 (RFC 2671) and TSIG (RFC 2845) are being used. RCODE will also be zero in a query.
Responses are somewhat harder to identify, unless you're observing both sides of the conversation and can correlate each response to the query that triggered it.
Obviously the QR bit will be set, and as above the QDCOUNT should still be one. The remaining counters, however, will have many and varied permutations. That said, it's exceedingly unlikely that any of the counters will be greater than 255, so you should be able to rely on bytes 4, 6, 8 and 10 all being zero.
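If it helps to see those header checks written out, here is a rough C sketch that applies them to the first 12 bytes of a UDP payload (the function name and the exact set of checks are my own; they follow the heuristics above, not any formal rule):

#include <stdint.h>
#include <stddef.h>

/* Heuristic: does this UDP payload look like a DNS query? */
static int looks_like_dns_query(const uint8_t *p, size_t len)
{
    if (len < 12) return 0;                            /* too short for a DNS header       */
    uint16_t flags   = (uint16_t)((p[2]  << 8) | p[3]);
    uint16_t qdcount = (uint16_t)((p[4]  << 8) | p[5]);
    uint16_t ancount = (uint16_t)((p[6]  << 8) | p[7]);
    uint16_t nscount = (uint16_t)((p[8]  << 8) | p[9]);
    uint16_t arcount = (uint16_t)((p[10] << 8) | p[11]);

    if (flags & 0x8000)     return 0;                  /* QR set: this would be a response */
    if (flags & 0x000F)     return 0;                  /* RCODE should be zero in a query  */
    if (qdcount != 1)       return 0;                  /* almost always exactly one        */
    if (ancount || nscount) return 0;                  /* zero in a query                  */
    if (arcount > 2)        return 0;                  /* EDNS0 / TSIG at most             */
    return 1;
}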
Following the headers you'll start to find resource records, the first one being the actual question that was asked (§4.1.2). The unfortunate part here is that the designers of the protocol saw fit to include a variable length label field (QNAME) in front of two fixed fields (QTYPE and QCLASS).
[To further complicate matters labels can be compressed, using a backwards pointer to somewhere else in the packet. Fortunately you will almost never see a compressed label in the Question Section, since by definition you can't go backwards from there. Technically a perverse implementor could send a compression pointer back into the header, but that shouldn't happen].
So, start reading each length byte and then skip that many bytes until you reach a null byte, and then the next two 16-bit words will be QTYPE and QCLASS. There are very few legal values for QCLASS, and almost all packets will contain the value 1 for IN ("Internet"). You may occasionally see 3 for CH (Chaos).
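A sketch of that label-walking loop in C (hypothetical function name; it rejects compression pointers since, as noted above, they shouldn't appear in the Question Section):

#include <stdint.h>
#include <stddef.h>

/* Skip the QNAME that starts at offset off; return the offset of QTYPE,
   or 0 if the name runs off the end of the packet or is compressed. */
static size_t skip_qname(const uint8_t *pkt, size_t len, size_t off)
{
    while (off < len) {
        uint8_t lablen = pkt[off++];
        if (lablen == 0)                        /* root label: end of QNAME       */
            return (off + 4 <= len) ? off : 0;  /* 2 bytes QTYPE + 2 bytes QCLASS */
        if ((lablen & 0xC0) == 0xC0)            /* compression pointer            */
            return 0;                           /* not expected in the question   */
        off += lablen;                          /* skip the label's bytes         */
    }
    return 0;
}
/* QTYPE  = (pkt[off]     << 8) | pkt[off + 1];
   QCLASS = (pkt[off + 2] << 8) | pkt[off + 3];   almost always 1 (IN). */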
That's it for now - if I think of anything else I'll add it later.

How about checking the port number? It should be 53 on at least the server side (the client side usually uses an ephemeral port).

Related

ModBus to Click PLC

Looking for help with understanding how to change the value at address DS1 (400001). First, the Click appears to use 6-digit Modbus addresses, so I'm not sure how to fit that into 2 bytes. I think I read that 40001 is the same address, but I do not see how. I am able to receive and understand the data when the Click PLC is the master. I would like my PC to be the master and change the value at that address.
Here is the data I am sending to the PLC. I am expecting this data to be sent to PLC slave 02 and change the data in DS1 (400001) to the value of zero.
frame(0) = 2 'Slave Address =2
frame(1) = 6 'Mode =6
frame(2) = CByte(40001 / 256) 'Register address, high byte
frame(3) = CByte(40001 Mod 256) 'Register address, low byte
frame(4) = 0 'Value, high byte
frame(5) = 0 'Value, low byte
Dim crc As Byte() = CRC(frame) ' Call CRC Calculate.
frame(6) = crc(0) '=59 Error Check Lo
frame(7) = crc(1) '=189 Error Check Hi
SerialPort1.Write(frame, 0, frame.Length)
Realize that Application Layer addressing in Modbus is different from the bytes on the wire. The leading digit in an Application Layer address (e.g. 4xxxx for Holding Register) is implied in the function code (e.g. Read Holding Register).
So on the wire, you drop the leading 4 and are left with an offset of 1-65536 (yes, Application Layer offsets are 1-based). But on the WIRE, offsets are 0-based, so you then subtract 1 to get a value of 0-65535.
So, sometimes you see Application Modbus HRs like 4001, 40001, or 400001, all referencing the first HR in the device. 5 digit is most common. I do see 4 digit for old RTU devices. I do see a 6 digit every once in a while where the remote device has a ton of memory (or not, like Click).
Realize that a lot of devices are implemented by people who only understand the low level protocol, so when they say something is at address 40001, it may actually be at offset 0x0001, or 0x0000 (the correct offset on the wire). I even saw one implementation that implemented the address 40001 as literally 0x9C41 on the wire (maybe 0x9C40). Yes, 6 digit Application Layer Holding Register 440001.
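Applied to the question's example, and assuming the rule above, DS1 = 400001 maps to wire offset 0x0000. A minimal C sketch (helper and function names are mine, CRC left out) of what the first six bytes of the Write Single Register frame should contain:

#include <stdint.h>

/* Hypothetical helper: 6-digit Holding Register address (4xxxxx) -> 0-based wire offset. */
static uint16_t hr_to_wire_offset(uint32_t app_address)
{
    return (uint16_t)(app_address - 400001);      /* drop the leading 4, subtract 1  */
}

/* Write Single Register (function 06), slave 2, DS1 (400001) := 0 */
static void build_frame(uint8_t frame[8])
{
    uint16_t off = hr_to_wire_offset(400001);     /* = 0x0000 for DS1                */
    frame[0] = 2;                                 /* slave address                   */
    frame[1] = 6;                                 /* function code: Write Single Reg */
    frame[2] = (uint8_t)(off >> 8);               /* register offset, high byte      */
    frame[3] = (uint8_t)(off & 0xFF);             /* register offset, low byte       */
    frame[4] = 0;                                 /* value, high byte                */
    frame[5] = 0;                                 /* value, low byte                 */
    /* frame[6], frame[7]: CRC-16 over bytes 0-5, low byte first, as in the question */
}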

Unusual unsigned short to bits swapping byte order

I'm reading in a stream of data, 64 bytes to be exact. I want to read 16 bits starting at the 480th bit of the incoming data. Unfortunately, I do not know what the incoming data type is, it's a bunch of random characters/boxes. Reading it in as an unsigned short (v), I get the number I am looking for, which for this example is 13.
my $satt_id = unpack("x60v1", $msgdata); #$satt_id == 13
This results in $satt_id == 13, which is 00000000 00001101.
If I pull the data as 16 bits (b or B), the string does not reflect the value of 13, but rather is byte-swapped or reversed.
my $satt_idb = unpack("x60b16", $msgdata); #satt_idb == "10110000 00000000"
my $satt_idB = unpack("x60B16", $msgdata); #satt_idB == "00001101 00000000"
Why is this occurring? I want to alter the data and resend out the message, which would be relatively easy if all of the message elements were the same size (16 bits, just pack back as it was unpacked), but some are 6, 4, 2, and 1 bits. Should I just use little-endian b and then reverse? After altering the data reverse it back to original order and then pack it back as b?
Completely separate and not perl related, but this haunted me in a different utility. I just conceded by swapping the values in the Enum designation. It worked, just wasn't very viable when the amount of bits got higher than 4 (16 different values).
Thanks!
EDIT: I'm guessing this is just related to binary notation? Apparently starts from the right? So $satt_idb is correct, if you read right to left. So to make it more user friendly, just reverse, alter, then reverse again and repack?
EDIT2: Basically I'm trying to make a user-friendly method of editing messages coming through a data stream. As I mentioned in the comments, if I want to edit a single bit from 0 to 1 (which in the message represents something as true/false), I don't want the user to have to worry about editing the octet of data received, just select from a dropdown of true/false.
If it works with v, it means the data is in little-endian byte order, which means
0b0000000000001101
is stored as
0b00001101 0b00000000
which is what you got.
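You can see the same byte layout outside Perl; for instance, this small C sketch prints the two bytes of the 16-bit value 13 as they sit in memory on a little-endian machine:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t v = 13;                          /* 0b0000000000001101 */
    const uint8_t *b = (const uint8_t *)&v;
    printf("%02x %02x\n", b[0], b[1]);        /* prints "0d 00" on a little-endian machine */
    return 0;
}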
Should I just use little-endian b and then reverse?
No. You are likely doing something incorrect if you are converting the numbers to a text representation (binary).
If you did somehow want the binary representation of the number, you could use
sprintf("%16b", $num)

Possible bug in Pd patch

I have made a very simple patch which, when a bang is triggered, is meant to output a random number between 0 and 2 that is never the same as the previous one; in other words, no number is repeated twice in a row.
In the way that I set it up, it is meant to work in theory. Even my programming mentor said that it should work, in theory, and he's generally a very smart man. He's informally known as being the boffin of the academy.
A few more details:
This happens in both Purr Data and Pure Data, with the exact same setup.
No external libraries are used, just plain vanilla objects.
Since there doesn't seem to be a way to attach the actual file itself, I will instead post an image of the code:
The problem is with depth-first processing (as used by Pd) and the related stack-unrolling, as this might lead to setting the 2nd input of [select] to an old value (which you didn't expect).
Example
Note: sel:in0 means the left-most inlet of [select], sel:out0 its left-most outlet, and so on.
Imagine the [select] is initialized to 0 and the [random 3] object outputs the sequence 2 0 0 2 0 2 ... (hint: [seed 96().
The expected output would be 2 0 2 0 2 ...; however, the output really is 2 0 2 2 2 ...
Now this is what happens if you consecutively send [bang( to the random generator:
1. random generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - and sends it out of sel:out1 (the reject outlet), displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
2. random generates 0
   - 0 is sent to sel:in0, which compares it to 2 (no match)
   - and sends it out of sel:out1, displaying the number 0
   - after that the number is sent to sel:in1, setting its internal state to 0.
3. random generates 0
   - 0 is sent to sel:in0, which compares it to 0 (match!)
   - and sends a bang through sel:out0 (the match outlet)
   - triggering a new call to random, which now generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - and sends it out of sel:out1, displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
   - after that the number 0 (still pending in trigger:out0) is sent to sel:in1, setting its internal state to 0!!!
4. random generates 0
   - 0 is sent to sel:in0, which compares it to 0 (match!)
   - and sends a bang through sel:out0
   - triggering a new call to random, which now generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - and sends it out of sel:out1, displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
   - after that the number 0 (still pending in trigger:out0) is sent to sel:in1, setting its internal state to 0!!!
As you can see, at the end of #3 the internal state of [select] is 0, even though the last number generated by [random] was 2 (because the left-most outlet of [trigger] will only send the 0 after it has sent the 2, due to stack-unrolling).
Solution
The solution is simple: make sure that the state of [select] contains the last displayed value, rather than the last one generated on the stack; that is, avoid feedback when modifying the internal state.
E.g. (using local send/receive for nicer ASCII art):
[r $0-again]
|
[bang(
|
[random 3]
|
| [r $0-last]
| |
[select]
| |
| [t f f]
| | |
| | [s $0-last]
| |
| [print]
|
[s $0-again]

interpreting i2c register map for ISL12022

I am trying to program an ISL12022M RTC and am having trouble interpreting the register map (I'm self-taught, with little experience). The documentation says that the RTC registers (SC, MN, HR, DT, MO, YR, DW) are BCD representations. In order to allow write capability into the RTC registers, the WRTC bit (bit 6 of address 08h) is set to '1'. The map looks like this:
The FAQ example from the Intersil site tells me that to set the WRTC bit I need to send DEh (slave address), 08h (register address) and 41 (enable the WRTC bit, other bits remain at default). Why not hex? Why 41 and not 40? And what does SC22 in SC bit 6, SC21 in bit 5, etc. mean?
Datasheet
Example
I've read the documentation until I can't see anymore and I've searched until I am just getting more confused. Any help is appreciated.
Well, it looks like these values in the map are nibbles. The range for the first register is 0 - 59. When represented in BCD, 4 bits are needed for the digit in the ones place and three bits are needed for the 10's place. So, bits 0 - 3 belong to the first nibble; bit 0 = SC(register name)1(first nibble)0(first bit). Bits 4, 5 and 6 belong to the second nibble. Bit 4 = SC(register name)2(second nibble)0(first bit). Bit 7 is not needed.
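So, for example, 37 seconds would be written as 0x37 (tens nibble 3, ones nibble 7). A small C sketch of the conversion, with helper names of my own (not from the datasheet):

#include <stdint.h>

/* Pack 0-59 seconds into the SC register layout:
   bits 6-4 = tens digit (SC22..SC20), bits 3-0 = ones digit (SC13..SC10). */
static uint8_t sec_to_bcd(uint8_t sec)
{
    return (uint8_t)(((sec / 10) << 4) | (sec % 10));   /* 37 -> 0x37 */
}

/* And back again when reading the register. */
static uint8_t bcd_to_sec(uint8_t bcd)
{
    return (uint8_t)(((bcd >> 4) & 0x07) * 10 + (bcd & 0x0F));
}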
The example sheet from Intersil has a typo; the WRTC value needs to be 40h or 41h.

socket conversation terminator

When reading data from a socket, it is important either to use a message terminator symbol or to add packet size information at the beginning of the message.
If a terminator symbol is used and a binary message is sent, there is no guarantee that the terminator symbol will not appear in the middle of the message (unless some special encoding is used).
On the other hand, if size information is attached: the size is unsigned, and if one byte is used for it, it cannot be used to transfer messages longer than 255 bytes. If a 4-byte integer is used, it is not even guaranteed that the 4 bytes will arrive as a whole; just 2 bytes of the size information may arrive, and a reader that assumes the whole size has arrived may use those 2 bytes while the rest of the integer data is discarded. Waiting for 4 bytes to become available in the read buffer may also wait forever if only 3 bytes ever arrive (e.g. if the total buffer is 7 bytes or 4077 bytes long).
Here come two possible approaches:
1. Use a sizeInfo + separator chunk: read until the separator is found; once it is found, read until sizeInfo bytes have passed.
2. Keep an unreadyBytes counter, initialized at 4; upon receiving the sizeInfo, change it accordingly.
Which one of these two is safer to use? Please criticize.
Edit
My central question is how to make sure that the size bytes have arrived properly, assuming messages are of variable size.
"It is not even guaranteed that the 4 bytes will arrive as a whole; just 2 bytes of the size information may arrive, and a reader that assumes the whole size has arrived may use those 2 bytes while the rest of the integer data is discarded. Waiting for 4 bytes to become available in the read buffer may wait forever if only 3 bytes ever arrive (e.g. if the total buffer is 7 bytes or 4077 bytes long)."
If you have a 4-byte length descriptor you should always read at least 4 bytes, because the sender should have written those bytes in every message your server receives. If you can't get them, maybe there has been a problem in the transmission. I really can't understand your problem.
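That said, read() on a stream socket can return fewer bytes than requested, so if you want to be strict about it you simply loop until the whole length prefix (and then the whole payload) has arrived. A minimal sketch, with a helper name (read_full) of my own choosing:

#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Keep calling read() until exactly n bytes have arrived, EOF, or an error. */
static ssize_t read_full(int fd, void *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, (char *)buf + got, n - got);
        if (r == 0)  return 0;    /* peer closed the connection         */
        if (r < 0)   return -1;   /* error (the caller can check errno) */
        got += (size_t)r;
    }
    return (ssize_t)got;
}

Call it once for the 4-byte length field and once more for the payload, and the partial-read problem goes away.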
Anyway, I'd suggest not using any separator chunk.
Put a header on the data blocks you are transmitting and use a buffer to reconstruct the packet flow.
You must at least read the header of a packet to determine its length.
You can define a basic structure for a packet:
#include <stdint.h>   /* uint32_t; also <string.h> and <unistd.h> for memcpy()/read() below */

struct packet {
    uint32_t id;                      /* length descriptor: payload length in bytes    */
    char payload[MAX_PAYLOAD_SIZE];   /* MAX_PAYLOAD_SIZE: whatever maximum you choose */
};
Then you declare a buffer to store the data read from the socket:
struct packet buffer;
Then you can read the data from the socket:
ssize_t n;
n = read(newsocket, &buffer, sizeof(uint32_t) + MAX_PAYLOAD_SIZE);
read returns the number of bytes read. If you read exactly one packet from the sender, then n equals sizeof(uint32_t) plus the payload length stored in the header. Otherwise you may have read more data (e.g. the sender sent you more than one packet). If you are receiving a stream of data split into units (represented by packet structures), then you may use an array of packet structures to store the complete packets received and a temporary buffer to manage incoming fragments.
Something like:
struct packet packet_buffer[MAX_PACKET_STORED];
char temp_buffer[sizeof(uint32_t) + MAX_PAYLOAD_SIZE];
ssize_t n;
n = read(newsocket, temp_buffer, sizeof temp_buffer);
/* here suppose we received a packet of 100 bytes of payload + 32 bits of length,
   plus a 100-byte fragment of the next packet; then: */
uint32_t first_pack_len, second_pack_len;
size_t second_pack_data_available_in_buffer;
first_pack_len = *((uint32_t *)&temp_buffer[0]);                                  /* retrieve the first packet's length       */
memcpy(&packet_buffer[0], temp_buffer, sizeof(uint32_t) + first_pack_len);        /* store the first packet into the array    */
second_pack_data_available_in_buffer = n - (sizeof(uint32_t) + first_pack_len);   /* total bytes read minus the first packet  */
second_pack_len = *((uint32_t *)&temp_buffer[sizeof(uint32_t) + first_pack_len]); /* length of the next packet, once complete */
I hope to have been clear enough. But maybe I'm misunderstanding your question.
Pay attention also to the fact that the two communicating end-systems could have different endianness, so it's a better idea to use the htonl/ntohl functions on the length when sending/receiving it. But this is another issue.
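For illustration, a hedged sketch of the sending side, with the length converted to network byte order before it goes on the wire (the receiver would read the 4-byte prefix, convert it back with ntohl(), and then read that many payload bytes, e.g. with a loop like the read_full() sketch above):

#include <stdint.h>
#include <sys/types.h>
#include <arpa/inet.h>   /* htonl() */
#include <unistd.h>      /* write() */

/* Send a 4-byte length prefix in network byte order, then the payload.
   Sketch only - a robust version would also loop on short writes. */
static int send_block(int fd, const void *payload, uint32_t len)
{
    uint32_t wire_len = htonl(len);
    if (write(fd, &wire_len, sizeof wire_len) != (ssize_t)sizeof wire_len)
        return -1;
    if (write(fd, payload, len) != (ssize_t)len)
        return -1;
    return 0;
}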