My question pertains to DIX (Ethernet II) and Ethernet: what is the difference at the link layer, if any? I don't see a separate link-layer type in the standard list, but when doing, for instance, a pcap capture, I see that Wireshark will frame them differently (I'm not going to post the pcap, but I'm sure the standards are posted).
source: http://www.tcpdump.org/linktypes.html
The Ethernet header has, in order:
a 6-octet destination address;
a 6-octet source address;
a 2-octet field.
In the original DEC/Intel/Xerox ("DIX") Ethernet specification, the 2-byte field was specified as a type field, giving an Ethernet type value specifying what protocol was running atop Ethernet; for example, a value of hex 0800 is used for IPv4.
In the original IEEE 802.3 specification, however, it was specified as a length field, giving the length of the payload following the Ethernet header. (Ethernet frames less than 64 octets, including the FCS, are padded to 64 octets; the length field allows the padding to be ignored. Some protocols, such as IPv4 and IPv6, include their own length field, so the padding can be ignored even without the Ethernet length field.)
If the 2-octet field isn't a type field, that leaves no mechanism for indicating what protocol is running atop Ethernet. The IEEE specified the IEEE 802.2 header, which follows the link-layer header in IEEE 802.x LANs (802.11, 802.5 Token Ring, 802.3 Ethernet, etc.), as well as FDDI; it includes 1-octet Destination Service Access Point (DSAP) and Source Service Access Point (SSAP) fields that can be used to specify the protocol running atop Ethernet.
So the difference between "DIX" Ethernet and "IEEE 802.3" Ethernet was, originally, that, in DIX Ethernet, the 2-octet field was a type field and there was no IEEE 802.2 header following the Ethernet header, whereas in IEEE 802.3 Ethernet, the 2-octet field was a length field and there was an IEEE 802.2 header following the Ethernet header.
The maximum length of an Ethernet frame is 1518 octets, including the 14-octet Ethernet header and the 4-octet FCS, so the maximum length of the Ethernet payload is 1518-(14+4) = 1500 octets. This means that the maximum value of an Ethernet length field is 1500.
The minimum value for an Ethernet type is hex 0600, or 1536. If the 2-octet field's value is between 0 and 1500, the field is a length field, and if it's 1536 or greater, it's a type field. (If it's between 1501 and 1535, it's an invalid Ethernet frame.) This allowed DIX and IEEE 802.3 frames to be used on the same Ethernet segment.
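In code, that discrimination rule looks roughly like this (a minimal Python sketch of the ranges above; the function name is mine, not from any standard):

def classify_type_length_field(value):
    """Classify the 2-octet field that follows the source address."""
    if value <= 1500:
        return 'length'     # IEEE 802.3 length field
    if value >= 0x0600:     # 1536
        return 'type'       # DIX / Ethernet II type field
    return 'invalid'        # 1501-1535: not a valid Ethernet frame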
IEEE Std 802.3x-1997 changed IEEE 802.3 so that it specified that the 2-octet field could either be a type field or a length field, and all subsequent versions of IEEE 802.3 have included that, so, starting at some point in 1997, DIX frames were also valid IEEE 802.3 frames.
Novell also ran their protocols directly atop IEEE 802.3, with no 802.2 header; their frames began with two hex FF octets, which meant that they would look like frames with the DSAP and SSAP values both set to hex FF. Hex FF is not a valid SSAP, as it has the "group address" bit set, so Novell frames without 802.2 ("Ethernet 802.3" frames) and 802.3 frames with an 802.2 header ("Ethernet 802.2" frames) can be distinguished from one another.
The DSAP and SSAP fields aren't sufficient to handle all protocol types, so the Subnetwork Access Protocol (SNAP) was devised. If the DSAP and SSAP in the 802.2 header are both hex AA, then the 802.2 header is followed by a SNAP header, which has a 3-octet Organizationally Unique Identifier (OUI) followed by a 2-octet Protocol ID (PID). The OUI is a number given out to organizations by the IEEE; it's used as the first 3 octets of MAC (Ethernet, 802.11, Token Ring, FDDI) addresses assigned to that organization (an organization can have multiple OUIs, so if it runs out of MAC addresses in one OUI's range, it can get another OUI and assign more). The PID's interpretation depends on the OUI value: an OUI of 0 means that the PID is an Ethernet type value; other OUIs mean that it's a value assigned by the organization to which that OUI belongs.
IPv4 and IPv6 packets sent over 802.x networks other than Ethernet, and over FDDI, have the link-layer header, the 802.2 header with the DSAP and SSAP both being AA, and a SNAP header with an OUI of 0 and an Ethernet type of hex 0800 (IPv4) or hex 86dd (IPv6). Over Ethernet, they'll have 0800 or 86dd in the type/length field, and no 802.2 header.
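To illustrate how the 802.2 and SNAP headers fit together, here is a minimal Python sketch of dissecting a frame whose 2-octet field is a length; it assumes an untagged frame and the 1-octet LLC control field used with SNAP (the function name is mine):

import struct

def dissect_8023_payload(frame):
    """Interpret the bytes after a 14-octet 802.3 header whose
    type/length field is a length."""
    dsap, ssap = frame[14], frame[15]
    if dsap == 0xFF and ssap == 0xFF:
        return ('novell-raw',)           # IPX directly atop 802.3, no 802.2 header
    if dsap == 0xAA and ssap == 0xAA:
        # 802.2 header (DSAP, SSAP, 1-octet control) followed by SNAP
        oui = frame[17:20]
        (pid,) = struct.unpack_from('!H', frame, 20)
        if oui == b'\x00\x00\x00':
            return ('ethertype', pid)    # OUI 0: the PID is an Ethernet type value
        return ('oui-specific', oui, pid)
    return ('llc', dsap, ssap)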
For more information, and some history about why all those frame types exist, see Don Provan's "Ethernet Frame Types: Provan's Definitive Answer" (archived at the Wayback Machine).
The link-layer header types in pcap and pcapng files, as listed in the tcpdump.org link-layer header types page, correspond to formats for the octets that appear at the beginning of the packet data. LINKTYPE_ETHERNET/DLT_EN10MB, as that page says, corresponds to "IEEE 802.3 Ethernet", with a 6-octet destination address, a 6-octet source address, and a 2-octet type/length field, in order, so packets with a type field and packets with a length field are both covered by LINKTYPE_ETHERNET. They are not distinguished by the link-layer header type value; they are distinguished by the range in which the type/length field value appears (valid length field, valid type field, invalid field).
(And, yes, perhaps Wireshark shouldn't make as big a distinction between Ethernet frames with a type field and Ethernet frames with a length field; it should perhaps show them both as Ethernet frames, and show the 2-octet field as a type field if it's a type, a length field if it's a length, and as a "none of the above" field if it's invalid.)
In RFC 5389, the MESSAGE-INTEGRITY calculation includes the attribute itself, but with dummy content,
and that dummy content is not defined.
How can MESSAGE-INTEGRITY be verified without knowing the dummy content's value?
Why would the MESSAGE-INTEGRITY calculation include itself?
Wouldn't it be faster to calculate MESSAGE-INTEGRITY, and equally secure, if it didn't include itself?
Since the MESSAGE-INTEGRITY attribute itself is not part of the hash, you can append whatever you want for the last 20 bytes. Then replace those bytes with the hash of all the bytes leading up to the attribute itself.
The algorithm is basically this:
Let L be the original size of the STUN message byte stream. This should equal the MESSAGE LENGTH value in the STUN message header plus 20 (the header itself isn't counted in that field).
Append a 4-byte attribute header (type 0x0008 for MESSAGE-INTEGRITY, length 20) onto the STUN message, followed by 20 null bytes.
Adjust the LENGTH field of the STUN message to account for these 24 new bytes.
Compute the HMAC-SHA1 of the first L bytes of the message (all but the 24 bytes you just appended).
Replace the 20 null bytes with the 20 bytes of the computed hash.
And as discussed in the comments, the placeholder bytes don't have to be null bytes; they can be anything, since they aren't included in the hash computation.
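A minimal Python sketch of the algorithm above (the function name is mine; it assumes a well-formed message with no FINGERPRINT attribute following, and `key` is the short-term or long-term credential key):

import hashlib
import hmac
import struct

def append_message_integrity(stun_message, key):
    """Append a MESSAGE-INTEGRITY attribute (RFC 5389) to a STUN message."""
    L = len(stun_message)                  # original size of the byte stream
    adjusted = bytearray(stun_message)
    # The header's length field (bytes 2-3, which doesn't count the 20-byte
    # header itself) must cover the 24 bytes about to be appended.
    (old_length,) = struct.unpack_from('!H', adjusted, 2)
    struct.pack_into('!H', adjusted, 2, old_length + 24)
    # HMAC-SHA1 over the first L bytes only; the new attribute is excluded.
    digest = hmac.new(bytes(key), bytes(adjusted[:L]), hashlib.sha1).digest()
    # Attribute header: type 0x0008 (MESSAGE-INTEGRITY), value length 20.
    return bytes(adjusted) + struct.pack('!HH', 0x0008, 20) + digest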
There's an implementation of MESSAGE-INTEGRITY for both short-term and long-term credentials on my GitHub: here and here.
Imagine an offer SDP that has one "m=" line with codecs 8 and 101 (for DTMF), marked as sendrecv:
m=audio 35904 RTP/AVP 8 101
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-15
a=sendrecv
The offer SDP is answered by an SDP with one "m=" line containing codecs 8 and 120 (for DTMF), similarly marked as sendrecv:
m=audio 1235 RTP/AVP 8 120
a=rtpmap:8 PCMA/8000
a=rtpmap:120 telephone-event/8000
a=fmtp:101 0-15
a=sendrecv
From RFC 3264:
For streams marked as sendrecv in the answer, the "m=" line MUST
contain at least one codec the answerer is willing to both send and
receive, from amongst those listed in the offer. The stream MAY
indicate additional media formats, not listed in the corresponding
stream in the offer, that the answerer is willing to send or
receive (of course, it will not be able to send them at this time,
since it was not listed in the offer).
The part of RFC 3264 quoted above shows that sending a different DTMF payload type (120 instead of 101) in the answer SDP complies with RFC 3264, since codec 8 (G.711 A-law) matches the offer SDP.
Is it okay to say the codec exchange completed successfully and DTMF will work, or is DTMF not expected to work at this point?
In general:
RTP payload type numbers 0-95 identify a static media encoding. E.g. payload type 8 means PCMA audio with a clock rate of 8000 Hz (RFC3551). As such, this description doesn't have to (but should) be included in the media format description of the SDP offer/answer, using the "a=rtpmap:" and "a=fmtp:" attributes (RFC4566).
Payload type numbers 96-127 are dynamic. These can be used to negotiate encodings that aren't included in the static list. When using one of these numbers, an encoding specification has to be included in the media format description to specify the exact encoding parameters.
Both negotiating parties can choose their own dynamic payload type number to represent the same media encoding; it doesn't have to be the same number. This can be useful when a party has already assigned a particular dynamic payload type number to another encoding. In your example one party uses 101 in the m-line and the other one uses 120, but these numbers represent the same media encoding (see the "a=rtpmap:" lines). Each party tells the other: 'when you send RTP using encoding X, you must include payload type number Y in the RTP packet headers.'
The payload type number is included in the PT field of the RTP packet header (RFC 3550).
In this case:
The "a=fmtp:" attribute in the answer specifies 101 as payload type number instead of 120. That means it doesn't apply to the telephone-events payload and no information is available as to which DTMF events are supported (RFC 4733). I think this is an implementation error and the fmtp attribute is meant to apply to the telephone-events payload.
It is an indication that you should expect DTMF issues. But it could also all work fine. Give it a try...
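For reference, if that reading is right, a corrected answer SDP would tie the fmtp line to payload type 120 (this corrected version is mine, not from the question):

m=audio 1235 RTP/AVP 8 120
a=rtpmap:8 PCMA/8000
a=rtpmap:120 telephone-event/8000
a=fmtp:120 0-15
a=sendrecv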
I'm working on an implementation of TCP for a class, and I'm wondering what the Window Size field actually means.
I understand that the window size is a number of bytes, but does that number of bytes apply to:
the payload of the TCP segment, not including the header, or to
the entire TCP segment, including the header?
Thus far, I've looked at Wikipedia and the RFCs.
RFC 793 states that:
The window indicates an allowed number of octets that the sender may
transmit before receiving further permission.
RFC 2581 states that:
receiver's advertised window (rwnd) is a receiver-side limit on the
amount of outstanding data
Neither of these makes it particularly clear. Anyone?
It applies to the payload only. The sender can always transmit ACKs, FINs, RSTs, etc., with no payload.
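As a rough illustration in code (a sketch using RFC 793's sender-side variables; the function and its names are mine, and it ignores the sequence space consumed by SYN and FIN, as well as wraparound):

def may_send(payload_len, snd_nxt, snd_una, rwnd):
    """The advertised window limits unacknowledged payload octets
    (SND.NXT - SND.UNA), not header octets."""
    return (snd_nxt - snd_una) + payload_len <= rwnd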
My service must not collect users' IP addresses in a form that identifies them; that is illegal in my country. And the stored data must not be decryptable, in case the server is attacked. So, I should corrupt some bits of the IP address.
I believe that corrupting some bits of the IP address keeps me from violating the law, and provides a good defense against strong rainbow tables.
But I want to maintain the uniqueness of IP addresses as much as possible.
Which bits of a (32-bit) IP address are more important than the others?
First a little bit about IP address structure:
IP addresses are aggregated by prefix. Take my own IPv4 address 37.77.56.75. In this example the ISP has block 37.77.56.0/21, which means that the prefix is 21 bits long, and the last (32 - 21 =) 11 bits can be used by the ISP. The ISP delegated to me the prefix 37.77.56.64/27, which leaves me (32 - 27 =) 5 bits to use. I put that whole prefix on the LAN of my home network. I then chose to use bits 01011 for my PC, which in this prefix gives IPv4 address 37.77.56.75.
For IPv6 the structure is the same. The addresses are just 128 bits long and written down in hexadecimal (which matches the binary structure and prefix lengths much better than the decimal notation of IPv4). For IPv6 the addresses in this example are:
The ISP has
2a00:8640::/32, delegates
2a00:8640:0001::/48 to me, I put
2a00:8640:0001:0000::/64 on my home LAN, and my PC has address
2a00:8640:0001:0000:3528:2df9:b368:e9e9.
Usually you don't write all the leading zeroes in IPv6 addresses, but I included them for clarity.
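If you want to experiment with that structure, Python's ipaddress module does the prefix arithmetic (the addresses are just the ones from the example above):

import ipaddress

isp = ipaddress.ip_network('37.77.56.0/21')    # the ISP's block
home = ipaddress.ip_network('37.77.56.64/27')  # the prefix delegated to me
pc = ipaddress.ip_address('37.77.56.75')       # host bits 01011 within that /27

print(home.subnet_of(isp))  # True: the /27 lies inside the ISP's /21
print(pc in home)           # True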
What you probably need:
If I understand your question correctly, you want to maintain the uniqueness of each address, but in such a way that the original address cannot be recovered. The way to do that is to use a hashing algorithm. Make sure you always input the addresses in the same way: either as binary strings, or, if you use printable strings, always in the canonical representation. You can use inet_pton/inet_ntop for that. This is an example in Python:
import hashlib
import socket

# Note: strict inet_pton implementations may reject the leading zeroes in
# this non-canonical IPv4 string; if so, strip them before converting.
bad_v4 = '010.001.002.003'
binary_v4 = socket.inet_pton(socket.AF_INET, bad_v4)
canonical_v4 = socket.inet_ntop(socket.AF_INET, binary_v4)
hash_v4 = hashlib.md5(canonical_v4.encode('ascii')).hexdigest()
print('Bad IPv4:', bad_v4)
print('Good IPv4:', canonical_v4)
print('MD5 digest (in hex):', hash_v4)
print()

bad_v6 = '2A00:8640:001:0:0:0:aB0:cDeF'
binary_v6 = socket.inet_pton(socket.AF_INET6, bad_v6)
canonical_v6 = socket.inet_ntop(socket.AF_INET6, binary_v6)
hash_v6 = hashlib.md5(canonical_v6.encode('ascii')).hexdigest()
print('Bad IPv6:', bad_v6)
print('Good IPv6:', canonical_v6)
print('MD5 digest (in hex):', hash_v6)
This will give you this output:
Bad IPv4: 010.001.002.003
Good IPv4: 10.1.2.3
MD5 digest (in hex): 447d3c6954efb460e6f47e331615176f
Bad IPv6: 2A00:8640:001:0:0:0:aB0:cDeF
Good IPv6: 2a00:8640:1::ab0:cdef
MD5 digest (in hex): b3d5aa35466b0564044ecfb6f558615c
And then use the hash as the identifier instead of the address.
I am most likely missing something here, but the pcap specification does not show the sender IP address and port of the captured packet.
Is there a way that I can know who sent the packet in the PCAP file?
http://wiki.wireshark.org/Development/LibpcapFileFormat
As per what EJP said, you will have to parse the packet data yourself. See the tcpdump.org link-layer header type page for a list of the values for the network field in the file header and the corresponding format of the headers at the beginning of the packet data.
You need to look at those headers to determine whether the packet is an IP packet. If it is, you then need to parse the IPv4 or IPv6 header to find the source IP address. Whether it's IPv4 or IPv6 is indicated either by the link-layer headers or by the "version" field, which appears in the same location in the IPv4 and IPv6 headers; for LINKTYPE_RAW, you have to look at the "version" field, as there are no headers in front of the IPv4 or IPv6 header. See RFC 791 for the format of the IPv4 header; see RFC 2460 for the format of the IPv6 header.
If you want port numbers, you will have to check the "Protocol" field of the IPv4 header, or check the "Next header" field of the IPv6 header and handle extension headers, to determine what protocol is being carried on top of IP. See the IANA Protocol Numbers registry for the values of that field; TCP is 6 and UDP is 17. If the protocol is TCP, see RFC 793 for the format of the TCP header; if the protocol is UDP, see RFC 768 for the format of the UDP header.
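As a concrete, deliberately simplified illustration of those steps, here is a Python sketch for LINKTYPE_ETHERNET packet data; it assumes an untagged Ethernet frame carrying IPv4, and handles neither IPv6 nor VLAN tags (the function name is mine):

import struct

def src_addr_and_port(packet):
    """Extract (source IP, source port) from LINKTYPE_ETHERNET packet data."""
    (ethertype,) = struct.unpack_from('!H', packet, 12)
    if ethertype != 0x0800:           # only IPv4 in this sketch
        return None
    ihl = (packet[14] & 0x0F) * 4     # IPv4 header length in bytes
    proto = packet[23]                # IPv4 Protocol field
    src_ip = '.'.join(str(b) for b in packet[26:30])
    if proto in (6, 17):              # TCP is 6, UDP is 17
        (src_port,) = struct.unpack_from('!H', packet, 14 + ihl)
        return (src_ip, src_port)
    return (src_ip, None)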
Or you might want to use an existing packet parsing library, such as libtrace for C or C++ or other libraries for other languages (I think they may exist for Perl, Python, C#, and Java, for example), as that may let you avoid doing a lot of the above.
(For that matter, you shouldn't need to be looking at the pcap format specification; you should be using libpcap/WinPcap to read the pcap file, as that also means your program may be able to read some pcap-ng files as well, if it's using a sufficiently recent version of libpcap.)
The packet origin is in the IP packet itself, so it doesn't need to be in the pcap headers as well.
I was able to get the IP addresses and port numbers of both the source and destination endpoints using the GitHub example below:
https://github.com/arisath/Pcap-dissection/blob/master/PcapDissector.java