How to Enable Timestamp Option in IP Header - sockets

I am designing an application-layer protocol on top of UDP. One of the requirements is that the receiving side should keep only the most up-to-date datagram.
Therefore, if datagram A was sent and then datagram B was sent, but datagram B was received first, datagram A should be discarded by the application when received.
One way to implement this is a counter stored in the data part of the UDP packet. The counter is incremented each time a datagram is sent.
I also noticed that IP options contain a timestamp option which looks suitable for this task.
My questions are (in the context of BSD-like sockets):
How do I enable this option on the sending side?
How do I read this field on the receiving side?

You can set IP options using setsockopt() with option level IPPROTO_IP and the name of the option; see the Unix/Linux IP documentation (for example, the ip(7) man page). Reading IP header options generally requires a RAW socket, which in turn usually requires root permissions. It's not advisable to (try to) use IP options, because they are very rarely used and may not be supported everywhere (either at the originating system or at the systems a packet passes through).
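For the sending side, a rough sketch (assuming a Linux/BSD system; the option bytes are written out numerically because the constant names vary between platforms) could look like the following. The buffer follows the RFC 791 timestamp option layout, and the kernel may reject the option or routers may strip it, as noted above.

    /* Hedged sketch: attach an RFC 791 timestamp option to every datagram
     * sent on a UDP socket via IP_OPTIONS. Support is not guaranteed. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int enable_ip_timestamp(int fd)
    {
        unsigned char opts[40];          /* IP options area is at most 40 bytes   */
        memset(opts, 0, sizeof(opts));
        opts[0] = 68;                    /* option type: timestamp (IPOPT_TS)      */
        opts[1] = 36;                    /* option length, including these 4 bytes */
        opts[2] = 5;                     /* pointer to the first free slot         */
        opts[3] = 0;                     /* flags 0: timestamps only, no addresses */

        if (setsockopt(fd, IPPROTO_IP, IP_OPTIONS, opts, opts[1]) < 0) {
            perror("setsockopt(IP_OPTIONS)");
            return -1;
        }
        return 0;
    }

On the receiving side there is no portable way to read the option through a normal UDP socket, which is why the raw-socket caveat above applies; the application-level counter from the question is the more robust design.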

Related

C# BeginSend/BeginReceive sometimes send or receive data attached [duplicate]

I have two apps sending TCP packets, both written in Python 2. When the client sends TCP packets to the server too fast, the packets get concatenated. Is there a way to make Python receive only the last sent packet from the socket? I will be sending files with it, so I cannot just use some character as a packet terminator, because I don't know the content of the file.
TCP uses packets for transmission, but this is not exposed to the application. Instead, the TCP layer may decide how to break the data into packets, or even fragments, and how to deliver them. Often this happens because of the underlying network topology.
From an application point of view, you should consider a TCP connection as a stream of octets, i.e. your data unit is the byte, not a packet.
If you want to transmit "packets", use a datagram-oriented protocol such as UDP (but beware, there are size limits for such packets, and with UDP you need to take care of retransmissions yourself), or wrap them manually. For example, you could always send the packet length first, then the payload, over TCP. On the other side, read the size first, then you know how many bytes need to follow (beware, you may need to read more than once to get everything, because of fragmentation). Here, TCP will take care of in-order delivery and retransmission, so this is easier.
TCP is a streaming protocol which doesn't expose individual packets. While reading from the stream and getting whole packets back might work in some configurations, it will break with even minor changes to the operating system or the networking hardware involved.
To resolve the issue, use a higher-level protocol to mark file boundaries. For example, you can prefix the file with its length in octets (bytes). Or you can switch to a protocol that already handles this kind of thing, like HTTP.
First you need to know whether the packets are combined before they are sent or after. Use Wireshark to check if the sender is sending one packet or two. If it is sending one, then your fix is to call flush() after each write. I do not know the answer if the receiver is combining packets after receiving them.
You could change what you are sending: send the number of bytes first, followed by the bytes themselves. Then the other side would know how many bytes to read.
Normally, TCP_NODELAY reduces that by disabling Nagle's algorithm, which coalesces small writes on the sending side. But there are very few situations where you need to switch it on; one of the few valid ones is telnet-style applications.
What you need is a protocol on top of the tcp connection. Think of the TCP connection as a pipe. You put things in one end of the pipe and get them out of the other. You cannot just send a file through this without both ends being coordinated. You have recognised you don't know how big it is and where it ends. This is your problem. Protocols take care of this. You don't have a protocol and so what you're writing is never going to be robust.
You say you don't know the length. Get the length of the file and transmit that in a header, followed by that many bytes.
For example, if the header is a 64-bit integer holding the length, then when you receive the header at the server end, you read the 64-bit number as the length and then keep reading until you have that many bytes, which should be the end of the file.
Of course, this is extremely simplistic but that's the basics of it.
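A sketch of that receiving side, assuming the 64-bit length header just described is sent big-endian (helper names are made up, and a real implementation would sanity-check the length before allocating):

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Read exactly len bytes; recv() may return less, hence the loop. */
    static int recv_all(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0)
                return -1;               /* error or peer closed the connection */
            p   += n;
            len -= n;
        }
        return 0;
    }

    /* Read the 64-bit big-endian length header, then that many bytes. */
    char *recv_file(int fd, uint64_t *out_len)
    {
        unsigned char hdr[8];
        if (recv_all(fd, hdr, sizeof(hdr)) < 0)
            return NULL;

        uint64_t len = 0;
        for (int i = 0; i < 8; i++)
            len = (len << 8) | hdr[i];   /* decode the big-endian length */

        char *data = malloc(len);        /* real code: bound-check len first */
        if (data == NULL || recv_all(fd, data, len) < 0) {
            free(data);
            return NULL;
        }
        *out_len = len;
        return data;
    }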
In fact, you don't have to design your own protocol. You could go to the internet and use an existing protocol, such as HTTP.

SCTP : transmitting with both interfaces at the same time

On my machine, I have 2 interfaces connected to another machine with 2 interfaces as well. I want to use both interfaces at the same time to transfer data. From the SCTP point of view, each machine is an endpoint, so I used a one-to-one socket. On the server side, I tried binding INADDR_ANY, as well as bind() on the first address and bindx() on the second. On the client side, I tried connect() and connectx(). Whatever I tried, SCTP uses only one of the two interfaces at a given time.
I also tested the SCTP support in Iperf and the test app in the source code. Nothing works.
What am I missing here? Do you have to send each packet by hand from one or the other address and to one or the other address?
Surely there must be a function where you can build several streams, where each stream allows communication between a specific pair of addresses. Then when you send a packet, SCTP would automatically choose which stream to send the packet on.
Thanks in advance!
What you are asking for is called concurrent multipath transfer (CMT), a feature that isn't supported by SCTP (at least not per RFC 4960).
As described in RFC 4960, by default SCTP transmits data over the primary path. Other paths are meant to be monitored with heartbeats and used when transmission over the primary path fails.
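For reference, a sketch of how both local addresses end up in one association (assuming Linux with lksctp-tools, linking with -lsctp; addresses and port are placeholders). Even with both addresses bound, RFC 4960 behaviour means data still follows the single primary path and the second address is only used for failover:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Bind two local addresses to a one-to-one SCTP socket. */
    int bind_both(int sd, const char *ip1, const char *ip2, uint16_t port)
    {
        struct sockaddr_in addrs[2];
        memset(addrs, 0, sizeof(addrs));

        addrs[0].sin_family = AF_INET;
        addrs[0].sin_port   = htons(port);
        inet_pton(AF_INET, ip1, &addrs[0].sin_addr);

        addrs[1] = addrs[0];
        inet_pton(AF_INET, ip2, &addrs[1].sin_addr);

        /* Both addresses become part of the association, but data is
         * still sent over the primary path only (no CMT). */
        return sctp_bindx(sd, (struct sockaddr *)addrs, 2, SCTP_BINDX_ADD_ADDR);
    }

The socket itself would be created with socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP) for the one-to-one style mentioned in the question.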

Do raw sockets reassemble the packet?

I am implementing simple tunneling and encryption of outgoing IP packets, i.e. each packet, including its IP header, is encrypted and prepended with a new IP header.
For this purpose I use raw sockets in the sender and the receiver.
I am just trying to figure out whether fragmentation of the outgoing packets can break the ability to decrypt them again.
Do raw sockets provide the assembled packet or do I need to implement de-fragmentation by myself ?
Assuming that you are referring to RAW sockets of the Berkeley Sockets API (aka BSD Sockets), the answer is: no, they do not combine fragments of fragmented IP packets. You will receive the IP packets, including the IP header, just as they arrived at your network interface.
Please note that there are various implementations of BSD sockets in different operating systems. You didn't say for which system(s) you are developing this code. And despite the fact that the POSIX standard based its network API on BSD sockets, POSIX doesn't specify RAW sockets at all, so a POSIX-conforming operating system doesn't even have to support RAW sockets.
And although many systems have adopted the BSD API, among them Linux/Android, FreeBSD, macOS/iOS, and even Windows, there are some important differences in their implementations: they support different socket options, their socket options behave in different ways, or they support different extensions. As an example of differences in socket options, see here. So your system may theoretically have an option you can set to get reassembled packets. This would not be portable, but RAW sockets themselves are only of limited portability to begin with.
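To make the consequence concrete, here is a small sketch (assuming Linux and struct iphdr from <netinet/ip.h>; field names differ on the BSDs) of how the receiver could at least detect that a packet read from the raw socket is a fragment that still needs reassembly:

    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <stddef.h>

    #define IP_FLAG_MF   0x2000   /* "more fragments" bit in frag_off     */
    #define IP_OFF_MASK  0x1FFF   /* fragment offset, in units of 8 bytes */

    /* Returns non-zero if the buffer holds an IP fragment rather than a
     * complete datagram. */
    int is_fragment(const unsigned char *pkt, size_t len)
    {
        if (len < sizeof(struct iphdr))
            return 0;                              /* too short to parse */
        const struct iphdr *ip = (const struct iphdr *)pkt;
        unsigned int frag = ntohs(ip->frag_off);
        return (frag & IP_FLAG_MF) != 0 || (frag & IP_OFF_MASK) != 0;
    }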
This is OS specific, but generally it depends on how you read them. Take a look at a couple of Linux man pages on POSIX sockets:
packet(7)
socket(7)
recvfrom(2)
In particular, if you use SOCK_RAW then recvfrom() will not always return full packets. See the following quotes:
If a message is too long to fit in the supplied buffer, excess bytes may be discarded depending on the type of socket the message is received from.
If len is too small to fit an entire packet, the excess bytes will be returned from the next read.
The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
To your question:
Do raw sockets provide the assembled packet or do I need to implement de-fragmentation by myself ?
They don't; you need to de-fragment yourself. If the socket isn't flushed, or fragmentation occurs, the call will return whatever data is available, possibly only partial packets; the expectation is that you reassemble them yourself.

Message delimitation in TCP communication

I am a newbie to networks and in particular TCP (I have been fooling around a bit with UDP, but that's it).
I am developing a simple protocol based on exchanging messages between two endpoints. Those messages need to be certified, so I implemented a cryptographic layer that takes care of that. However, while UDP has a sound definition of a packet that constitutes the minimum unit that can get transferred at a time, the TCP protocol (as far as my understanding goes) is completely stream oriented.
Now, this puzzles me a bit. When exchanging messages, how can I tell where one starts and the other one ends? In principle, I can obviously communicate fixed length messages or first communicate the size of each message in some header. However, this can be subject to attacks: while of course it is going to be impossible to distort or determine the content of the communication, the above technique would make it easy to completely disrupt my communication just by adding a single byte in the middle.
Say that I need to transfer a message 1234567 bytes long. First of all, I communicate 4 bytes with an integer representing the size of the message. Okay. Then I start sending out the actual message. That message gets split in several packets, which get separately received. Now, an attacker just sends in an additional packet, faking it as if it was part of the conversation. It can just be one byte long: this completely destroys any synchronization mechanism I have implemented! The message has a spurious byte in the middle, and it doesn't successfully get decoded. Not only that, the last byte of the first message disrupts the alignment of the second message and so on: the connection is destroyed, and with a simple, simple attack! How likely and feasible is this attack anyway?
So I am wondering: what is the maximum data unit that can be transferred at once? I understand that a call to send does not correspond to exactly one call to receive: the message can be split into different chunks. How can I group the packets together in some way so that I know they belong together? Is there a way to define a higher-level message that gets reconstructed and aligned as a whole and triggers a single call to a receive-like function? If not, what other solutions can I find to keep my communication re-alignable even in the presence of an attacker?
Basically, it is difficult to control the way the OS divides the stream into TCP packets. (The RFC defining the TCP protocol states that a TCP stack should allow clients to force it to send buffered data by using the push function, but it does not define how many packets this should generate. And in any case, the attacker can modify any of them.)
And these TCP packets can get divided even further into IP fragments on their way through the network (which can be opted out of with the 'Don't Fragment' IP flag -- but that flag can cause your packets not to be delivered at all).
I think that your problem is not about introducing packets into a stream protocol, but about securing it.
IPSec could be very beneficial in your scenario, as it operates on the network layer.
It provides integrity for every packet sent, so any modification on-the-wire gets detected and the invalid packets are dropped. In case of TCP the dropped packets get re-transmitted automatically.
(Almost) everything is done automatically by the OS -- so you do not need to worry about it (and make mistakes doing so).
The confidentiality can be assured as well (with the same advantage of not re-inventing the wheel).
IPSec should provide you a reliable transport protocol ontop of which you can use whatever framing format you like.
Another alternative is using SSL/TLS on top of the TCP session, which is less robust (as it closes the whole connection on an integrity error).
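As a rough idea of what the TLS route involves on the client side (assuming OpenSSL 1.1.0 or later; error handling abbreviated), an already connected TCP socket can be wrapped like this, after which SSL_read()/SSL_write() replace recv()/send():

    #include <openssl/err.h>
    #include <openssl/ssl.h>
    #include <stdio.h>

    /* Wrap an already connected TCP socket in TLS (client side). */
    SSL *tls_wrap_client(int sockfd, const char *hostname)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (ctx == NULL)
            return NULL;
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);  /* verify the server cert  */
        SSL_CTX_set_default_verify_paths(ctx);           /* use the system CA store */

        SSL *ssl = SSL_new(ctx);
        if (ssl == NULL) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        SSL_set_fd(ssl, sockfd);                         /* attach the TCP socket */
        SSL_set_tlsext_host_name(ssl, hostname);         /* SNI */

        if (SSL_connect(ssl) != 1) {                     /* TLS handshake */
            ERR_print_errors_fp(stderr);
            SSL_free(ssl);
            SSL_CTX_free(ctx);
            return NULL;
        }
        /* ctx is intentionally kept alive for the lifetime of the connection. */
        return ssl;
    }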
Now, an attacker just sends in an additional packet, faking it as if it was part of the conversation. It can just be one byte long: this completely destroys any synchronization mechanism I have implemented!
Thwarting such an injection problem is dealt with by securing the stream. Create an encrypted stream and send your packets through that.
Of course the encrypted stream itself then has this problem; its messages can be corrupted. But those messages have secure integrity checks. The problem is detected, and the connection can be torn down and re-established to resynchronize it.
Also, some fixed-length synchronizing/framing bit sequence can be used between messages: some specific bit pattern. It doesn't matter if that pattern occurs inside messages by accident, because we only ever specifically look for that pattern when things go wrong (a corrupt message is received), otherwise we skip that sequence. If a corrupt message is received, we then receive bytes until we see the synchronizing pattern, and assume that whatever follows it is the start of a message (length followed by payload). If that fails, we repeat the process. When we receive a correct message, we reply to the peer, which will re-transmit anything we didn't get.
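A sketch of that resynchronization step (the 4-byte pattern and function name are made up; the point is only the sliding comparison): after a corrupt message, bytes are consumed one at a time until the pattern appears, and whatever follows is treated as the start of the next length-prefixed message.

    #include <sys/socket.h>
    #include <sys/types.h>

    static const unsigned char SYNC[4] = { 0xDE, 0xAD, 0xBE, 0xEF };

    /* Consume bytes until the sync pattern has been seen.
     * Returns 0 on success, -1 on error or connection close. */
    int resync(int fd)
    {
        unsigned char window[4] = { 0, 0, 0, 0 };
        for (;;) {
            unsigned char c;
            ssize_t n = recv(fd, &c, 1, 0);
            if (n <= 0)
                return -1;
            /* slide the 4-byte window and compare against the pattern */
            window[0] = window[1];
            window[1] = window[2];
            window[2] = window[3];
            window[3] = c;
            if (window[0] == SYNC[0] && window[1] == SYNC[1] &&
                window[2] == SYNC[2] && window[3] == SYNC[3])
                return 0;
        }
    }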
How likely and feasible is this attack anyway?
TCP connections are identified by four items: the source and destination IP, and source and destination port number. The attacker has to fake a packet which matches your stream in these four identifiers, and sneak that packet past all the routers and firewalls between that attacker and the receiving machine. The attacker also has to be in the right ballpark with regard to the TCP sequence number.
Basically, this is next to impossible for an attacker C to perpetrate against endpoints A and B which are both distant from C on the network. The fake source IP will be rejected long before C is able to reach its destination. It's more plausible as an inside job (which includes malware): C is close to A and B.

Sockets Async Connection

I am new to async socket connections. Can you please explain how this technology works?
There's an existing application (server) which requires socket connections to transmit data back and forth. I already created my application (.NET), but the server application doesn't seem to understand the XML data that I am sending. My documentation gives me two ports, one to send and another one to receive.
I need to be sure that I understand how this works.
I got the IP addresses and also the two Ports to be used.
A socket is the most "raw" way you can use to send byte-level TCP and UDP packets across a network.
For example, your browser uses a socket TCP connection to connect to the StackOverflow web server on port 80. Your browser and the server exchange commands and data according to an agreed-on structure/protocol (in this case, HTTP). An asynchronous socket is no different than a synchronous socket except that is does not block the thread that's using it.
This is really not the most ideal way to work (check and see if your server/vendor application supports SOAP/Web Services, etc), but if this is really the only way, there could be a number of reasons why it's failing. To name a few...
Not actually getting connected or sending data. Run a test using WinsockTool (http://www.isatools.org/tools/winsocktool.msi) and simulate your client first to make sure the server is working as expected.
Encoding incorrect - You're sending raw bytes across the network... Make sure you're using the correct encoding to convert your XML into bytes (ASCII, UTF8, etc).
Buffer Length - Your sending buffer (the amount of data you can transmit in one shot) may be too small or the server may expect a content of a certain length, and your XML could be getting truncated.
let's break a misconception... sockets are FULL-DUPLEX: you connect to a server using one port, then you can send AND receive data through the same socket, no need for 2 port numbers. (actually, there is a local port assigned for receiving data, but it is: 1. assigned automatically when the socket is created (unless you bind it yourself) and 2. not used in the function calls that receive data)
so you tell us that your documentation gives you 2 port numbers... i assume that the "server" is an already existing in-house application, and you are trying to talk to it. if the doc lists 2 ports, then you will need 2 sockets: one for sending and another one for receiving. now i would suggest you first use a synchronous socket before trying the async way: a synchronous socket is less error-prone for a first test.
(by the way, let's break another misconception: if well coded, once a server listens on a port, it can receive any number of connections through the same port number, no need to open 2 listening ports to accept 2 connections... sorry for the rant, but i've seen those 2 errors committed enough times, it gives me an urge to kill)
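To illustrate the full-duplex point in the plainest terms (a C sketch rather than .NET, with a placeholder address and port): one connected socket is enough to send a request and read the reply, no second port involved.

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in srv;
        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(5000);                    /* placeholder port    */
        inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);  /* placeholder address */

        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        const char *msg = "<request/>";
        send(fd, msg, strlen(msg), 0);                   /* send on the socket ...    */

        char reply[4096];
        ssize_t n = recv(fd, reply, sizeof(reply), 0);   /* ... and receive on it too */
        if (n > 0)
            printf("got %zd bytes back\n", n);

        close(fd);
        return 0;
    }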