I got this definition below. Is it true, and what is the difference between a packet and a payload?
The ESP (Encapsulating Security Payload) protocol is a member of the IPsec suite. Its purpose is to guarantee the integrity of the payload (message), data origin authentication of IP packets, and confidentiality of the payload.
It does provide protection for the entire packet, not only the payload.
A payload is the part of the packet with the actual info (the good stuff!). There are other parts of a packet, the packet headers, that describe the payload, such as how big it is.
The description is telling you that protection is provided for everything in the packet, including its headers. Hope that helps!
The answer is:
In tunnel mode ESP secures the whole original packet; in transport mode it secures only the payload, not the IP header.
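To picture the difference, here is a rough sketch (plain Python, not real header formats; the layouts follow the usual descriptions of ESP in RFC 4303):

```python
# Sketch: ESP packet layout in transport vs. tunnel mode.
# In transport mode the original IP header travels in the clear and only
# the payload is protected; in tunnel mode the entire original packet is
# wrapped inside a new one and protected.

transport_mode = [
    "original IP header",   # sent in the clear
    "ESP header",           # SPI + sequence number
    "TCP/UDP payload",      # encrypted
    "ESP trailer",          # padding + next-header field, encrypted
    "ESP auth data",        # integrity check value
]

tunnel_mode = [
    "new IP header",        # outer header, e.g. the VPN gateways' addresses
    "ESP header",
    "original IP header",   # encrypted, hidden from observers
    "TCP/UDP payload",      # encrypted
    "ESP trailer",
    "ESP auth data",
]

for name, layout in (("transport", transport_mode), ("tunnel", tunnel_mode)):
    print(f"{name:>9}: " + " | ".join(layout))
```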
I'm working on a game server that communicates with a game client, and I wonder whether the packets the server sends to the client remain in sequence when the client receives them.
For example, the server sends packets A, B, C,
but the client receives B, A, C?
I have read the great blog http://packetlife.net/blog/2010/jun/7/understanding-tcp-sequence-acknowledgment-numbers/
It seems that every packet sent by the server has a corresponding ACK from the client, but it does not say why the packets received by the client arrive in the same sequence the server sent them.
It's worth reading TCP's RFC, particularly section 1.5 (Operation), which explains the process. In part, it says:
The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system. This is achieved by assigning a sequence number to each octet transmitted, and requiring a positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received within a timeout interval, the data is retransmitted. At the receiver, the sequence numbers are used to correctly order segments that may be received out of order and to eliminate duplicates. Damage is handled by adding a checksum to each segment transmitted, checking it at the receiver, and discarding damaged segments.
I don't see where it's ever made explicit, but since the acknowledgement (as described in section 2.6) describes the next expected packet, the receiving TCP implementation is only ever acknowledging consecutive sequences of packets from the beginning. That is, if you never receive the first packet, you never send an acknowledgement, even if you've received all other packets in the message; if you've received 1, 2, 3, 5, and 6, you only acknowledge 1-3.
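A minimal sketch of that cumulative-acknowledgment rule (plain Python; sequence numbers are simplified to whole segments here, whereas real TCP numbers individual octets):

```python
# Sketch: TCP-style cumulative acknowledgment. The receiver only ever
# ACKs up to the first gap, so out-of-order segments are buffered but
# not acknowledged until the missing one arrives.

def next_ack(received: set[int]) -> int:
    """Return the lowest segment number not yet received."""
    ack = 1
    while ack in received:
        ack += 1
    return ack

received = {1, 2, 3, 5, 6}   # segment 4 was lost or is still in flight
print(next_ack(received))    # 4 -> "I have everything up to 3"

received.add(4)              # the retransmitted segment finally arrives
print(next_ack(received))    # 7 -> segments 5 and 6 are now covered too
```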
For completeness, I'd also direct your attention to section 2.6, again, after it describes the above-quoted section in more detail:
An acknowledgment by TCP does not guarantee that the data has been delivered to the end user, but only that the receiving TCP has taken the responsibility to do so.
So, TCP ensures the order of packets, unless the application doesn't receive them. That exception probably wouldn't be common, except for cases where the application is unavailable, but it does mean that an application shouldn't assume that a successful send is equivalent to a successful reception. It probably is, for a variety of reasons, but it's explicitly outside of the protocol's scope.
TCP guarantees sequence and integrity of the byte stream. You will not receive data out of sequence. From RFC 793:
Reliable Communication: A stream of data sent on a TCP connection is delivered reliably and in order at the destination.
I am designing an application layer protocol on top of UDP. One of the requirements is that the receiving side should keep only the most up-to-date datagram.
Therefore, if datagram A was sent and then datagram B was sent, but datagram B was received first, datagram A should be discarded by the application when received.
One way to implement this is a counter stored in the data part of the UDP packet. The counter is incremented each time a datagram is sent.
I also noticed that IP options contain a timestamp option which looks suitable for this task.
My questions are (in the context of BSD-like sockets):
How do I enable this option on the sending side?
How do I read this field on the receiving side?
You can set IP options with setsockopt(), using option level IPPROTO_IP and specifying the name of the option; see the Unix/Linux IP documentation, for example here. Reading IP header options generally requires using a RAW socket, which in turn usually requires root permissions. It's not advisable to (try to) use IP options, because they may not always be supported, since they are very rarely used (either at the originating system or at systems the packet passes through).
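For completeness, the counter-in-the-payload approach from the question is only a few lines on top of BSD-style sockets. A minimal sketch (Python's socket module wraps the BSD API; the 4-byte header and the helper names are mine, and counter wraparound is ignored):

```python
# Sketch: keep only the most recent datagram by carrying a sequence
# counter in the first 4 bytes of the UDP payload (network byte order).
import socket
import struct

HEADER = struct.Struct("!I")  # 32-bit unsigned counter, big-endian

def send(sock: socket.socket, addr, counter: int, data: bytes) -> int:
    sock.sendto(HEADER.pack(counter) + data, addr)
    return counter + 1        # the caller stores the incremented counter

last_seen = -1

def recv(sock: socket.socket) -> bytes | None:
    global last_seen
    payload, _ = sock.recvfrom(65535)
    (counter,) = HEADER.unpack_from(payload)
    if counter <= last_seen:
        return None           # stale: a newer datagram already arrived
    last_seen = counter
    return payload[HEADER.size:]
```

A production version would also have to deal with the counter wrapping around (serial number arithmetic, as in RFC 1982, is the usual fix).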
I just read "What is the difference between a port and a socket?" and it seems a socket is something used to create connections. Then how about a packet? Is it something sent over the connection? So is the progression "IP -> port -> socket -> sending packets"?
A packet is a chunk of data.
All IP networks send data in small chunks across the network.
A socket (in relation to TCP/IP) is an abstraction built on top of this that provides a reliable stream of data.
When using a socket, rather than dealing with sending individual packets of data, you just send a stream of data of any length.
The socket implementation deals with splitting it into packets and sending it over the network and handles resending packets that are lost on the way.
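A concrete way to see the stream abstraction (a minimal Python sketch; note that recv() hands you "some bytes from the stream", not "one packet"):

```python
# Sketch: with TCP you write a byte stream; the kernel decides how it is
# split into packets and reassembles them in order on the other side.
import socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

chunks = []
while data := sock.recv(4096):   # may return less or more than one send()
    chunks.append(data)
sock.close()

print(b"".join(chunks)[:80])     # first 80 bytes of the reassembled stream
```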
A socket is a combination of an IP address and port number.
A packet is a layer 3 protocol data unit, or a piece of data associated with the network layer.
As far as the "progression" you mention, the OSI model is a helpful tool to describe the flow.
Each OSI model layer has an associated data unit. You can see above that a packet is a piece of data associated with the network layer. The network layer you're describing uses IP addresses to communicate.
Layer 4, or the Transport layer, uses port numbers for communication. A socket is the combination of port number and IP address.
The flow from the sender's perspective goes down the OSI model. Application data is surrounded with Transport headers (source and destination port numbers), then Network headers (source and destination IP addresses), then data-link headers (typically MAC addresses on an Ethernet LAN) and finally encoded as bits on the wire.
The flow from the recipient's perspective is just the reverse, climbing up the stack. Bits are received on the wire, then the data is progressively "unpacked", removing headers: if the destination MAC matches the receiver, those headers are stripped; if the IP matches, those headers are stripped; if an open port is found, those headers are removed, finally resulting in unpacked application-level data in the higher layers.
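That nesting is easy to picture as plain concatenation (an illustrative Python sketch; the bracketed strings stand in for real header formats):

```python
# Sketch: encapsulation going *down* the stack -- each layer prepends
# its own header to whatever the layer above handed it.
app_data = b"hello"                         # application data
segment  = b"[TCP ports]" + app_data        # transport: port numbers
packet   = b"[IP addrs]"  + segment         # network: IP addresses
frame    = b"[MAC addrs]" + packet          # data link: MAC addresses

# The receiver reverses this, stripping one header per layer on the way up.
print(frame)   # b'[MAC addrs][IP addrs][TCP ports]hello'
```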
Hope this helps clarify.
A socket is the abstraction you use to send packets of data.
The socket is bound to your system to allow communication between two processes.
The packet is a fragment of information that is sent through the socket.
In a Modbus server implementation, what response should the server send if it receives a request from the client that contains too few (or no) data bytes to interpret correctly?
For example, a Modbus RTU server (with address 0x01) receives the ADU datagram: 0x01, 0x01, 0xE0, 0xC1. In this case no physical transport layer errors are detected, the address is correct, the CRC is correct, and the function (Read Coils) is correct and implemented on the server, but the PDU does not contain the Starting Address or Quantity of Inputs fields required to process the request.
Should the server assume that a (very rare) bit error has occurred and not respond at all?
Should the server interpret this as 'a value in the query data field' being not allowed for the server and respond with an ILLEGAL DATA VALUE exception?
Should the server do something completely different?
In my experience, at least with Modbus TCP, devices tend to just ignore malformed requests.
According to the specification (MODBUS APPLICATION PROTOCOL SPECIFICATION V1.1b3), the exception (code 3) is correct. Figure 9, the MODBUS Transaction state diagram, clearly indicates the exception response to an incorrectly formed message.
I suspect the common response of silently rejecting the message is indistinguishable from a transmission error; an explicit exception response, on the other hand, gives the implementor of the faulty client a reason to correct their implementation.
Your suggestion that a communication error triggers this is possible, but only if the underlying link does not detect missing bytes. Any byte other than 0xFF will introduce a start bit into a serial channel, and a missing byte in the TCP/UDP implementations is even less likely.
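For illustration, here is what that exception path might look like on the server side (a Python sketch, not a complete server; handle_read_coils is a hypothetical helper, while the CRC routine is the standard Modbus CRC-16 with polynomial 0xA001):

```python
# Sketch: a Modbus RTU server answering a too-short Read Coils request
# with exception code 0x03 (ILLEGAL DATA VALUE).
import struct

def crc16_modbus(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            lsb = crc & 1
            crc >>= 1
            if lsb:
                crc ^= 0xA001
    return crc

def handle_read_coils(address: int, pdu: bytes) -> bytes:
    # A well-formed Read Coils PDU is function (1) + start (2) + quantity (2).
    if len(pdu) < 5:
        body = struct.pack("BBB", address, pdu[0] | 0x80, 0x03)
        return body + struct.pack("<H", crc16_modbus(body))  # CRC low byte first
    ...  # normal processing elided

# The truncated request from the question, with its (valid) CRC stripped:
print(handle_read_coils(0x01, bytes([0x01])).hex())  # '01' '81' '03' + CRC
```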
One of the reasons that UDP is not a good choice, even for localhost communication, is out-of-order delivery. But if I limit the size of each datagram so that fragmentation cannot occur,
e.g. limit it to 1 KB of data, can I assume that the reliability of UDP is the same as TCP's?
[1] Why do I get UDP datagrams out of order even with processes running locally?
No, it's not the same.
Getting packets in sequence is not the only thing that comes into the picture when you talk about reliability; there's more to it.
From RFC 768 (User Datagram Protocol):
This protocol provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism. The protocol is transaction oriented, and delivery and duplicate protection are not guaranteed. Applications requiring ordered reliable delivery of streams of data should use the Transmission Control Protocol (TCP) [2].
So, by keeping datagrams small, you may ensure that out-of-order delivery never happens, but you still can't ensure the data is correctly received at the other end. This holds true even if you are sending data on a local host. A bit error can occur for any unknown reason; that's why there is a checksum in the header. If the checksum at the receiving end doesn't match, the packet is discarded without the sender knowing about it. This doesn't happen in TCP, since the receiver sends an ACK to the sender on receiving the correct data.
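For reference, the checksum mentioned here is the standard Internet checksum (RFC 1071): a 16-bit ones'-complement sum of the data. A minimal sketch:

```python
# Sketch: the Internet checksum (RFC 1071) used by UDP, TCP and IP.
# If the receiver's recomputation doesn't match, the datagram is
# silently dropped -- UDP never tells the sender.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

payload = b"hello, world"
csum = internet_checksum(payload)
# Checking: data plus its own checksum must sum to all ones (complement 0).
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```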