I would like to know whether there is any free, open-source RADIUS server that supports RADIUS fragmentation, i.e. a RADIUS server that accepts packets larger than the 4 KB size limit from the client and reassembles them at the server end. And once the whole packet is assembled, will it successfully authenticate the packet?
Any pointers will help.
I'm not aware of any, and I'm not sure you could even do this. Since there is no nice header structure, I can't guarantee packets will be "complete". When we have a "large" packet, we end up splitting it and using a VSA carrying an association value. For example, we have an interim record with a VSA (AssociationID) = 00512121 and another with that same association ID.
You could do a lot of black magic with CoA messages.
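For what it's worth, here is a minimal sketch of that splitting approach in Rust. The AssociationID idea comes from the description above, but the names, the chunk size, and the Chunk struct are my own assumptions, not part of any RADIUS library:

```rust
// Split a large logical record into chunks that each fit in one RADIUS
// attribute. Every chunk carries the same association ID so the collector
// can group and reassemble them in index order.
const MAX_ATTR_LEN: usize = 247; // illustrative limit for one attribute value

#[derive(Debug)]
struct Chunk<'a> {
    association_id: u32, // shared by all chunks of one logical record
    index: u16,          // position of this chunk
    total: u16,          // how many chunks to expect
    payload: &'a [u8],
}

fn split_record(association_id: u32, data: &[u8]) -> Vec<Chunk<'_>> {
    let total = data.chunks(MAX_ATTR_LEN).count() as u16;
    data.chunks(MAX_ATTR_LEN)
        .enumerate()
        .map(|(i, payload)| Chunk {
            association_id,
            index: i as u16,
            total,
            payload,
        })
        .collect()
}
```

The collector side then groups chunks by association ID and concatenates the payloads once all `total` chunks have arrived.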
No, but there is an RFC currently in draft for this; instead of inventing your own scheme, you should use the draft standard. When it's complete, we will add support for packet fragmentation to FreeRADIUS.
https://datatracker.ietf.org/doc/html/draft-perez-radext-radius-fragmentation-06
I have a question regarding the TcpStream type in Rust. I want to read data from a server. The problem is that it is not guaranteed that the data arrives in a single TCP packet.
And here comes my question:
Is the read method capable of reading more than one packet, or do I have to call it more than once? Is there any "best practice"?
From user space, TCP packets are not visible and their boundaries don't matter. Instead, user space only reads from and writes to a byte stream. Packetizing is done at a lower level in a way that is optimal for latency and bandwidth. It might well happen that multiple writes from user space end up in the same packet, and it might also happen that a single write results in multiple packets. The same is true for read: it might get part of a packet, or it might get the payload of multiple consecutive packets.
Any packet boundaries from the underlying transport are no longer visible from user space. Thus protocols built on TCP must implement their own message semantics on top of the byte stream.
All of this is not specific to Rust, but applies to other programming languages too.
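To make that concrete, here is a minimal sketch of reading from a Rust TcpStream in a loop. It assumes the peer signals "end of message" by closing the connection; if it doesn't, you need your own framing (length prefix, delimiter, etc.) as described above:

```rust
use std::io::Read;
use std::net::TcpStream;

// Keep calling read() and appending to a buffer until the peer closes the
// connection (read() returning 0 means end of stream). Each individual read()
// may return any number of bytes, regardless of how the peer called write().
fn read_to_end_manually(stream: &mut TcpStream) -> std::io::Result<Vec<u8>> {
    let mut data = Vec::new();
    let mut chunk = [0u8; 4096];
    loop {
        let n = stream.read(&mut chunk)?; // however many bytes are available
        if n == 0 {
            break; // connection closed by the peer
        }
        data.extend_from_slice(&chunk[..n]);
    }
    Ok(data)
}
```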
Sorry if this question has been asked before (I could not find any questions similar to mine), but is there a "maximum" buffer size that I should be sending over a socket at one time? If I were, for example, to send data with a buffer size equal to the maximum allowed by sockets, would there be anything bad about that? Thanks in advance for any help.
It depends on the kind of socket. With TCP, a connection is a byte stream, and the OS will take care of how best to split the bytes into packets and concatenate them back together on the other side.
With UDP, each send instead results in a single UDP message (datagram), and there is an upper limit of 64 KB for the size of a datagram. But even though UDP supports 64 KB datagrams in theory, it is not a good idea to use messages that large. Since the maximum transmission unit (MTU) of the underlying layer is much smaller (around 1500 bytes for Ethernet), the message needs to be fragmented, and it is easy to lose a single fragment, in which case the whole message is considered lost.
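For the UDP side, a small sketch (the address and payload size are just placeholders): each send_to() call produces exactly one datagram, and keeping it well below a typical MTU avoids IP fragmentation:

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    let payload = vec![0u8; 1400]; // comfortably below a typical Ethernet MTU
    socket.send_to(&payload, "127.0.0.1:9000")?; // one call == one datagram
    Ok(())
}
```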
You can send as much data as you want - that's the point of the abstraction. The underlying layers will do what they want, breaking things into chunks as necessary, but you shouldn't have to care about that as a user of the interface. The read and write interfaces will both return the length actually transferred in the case of a partial operation, so a simple loop should be sufficient for a large transfer.
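For example, a sketch of such a loop for the write side in Rust (this is essentially what the standard library's write_all() already does for you):

```rust
use std::io::Write;
use std::net::TcpStream;

// write() may accept only part of the buffer, so keep writing the remainder
// until everything has been handed to the OS.
fn send_all(stream: &mut TcpStream, mut buf: &[u8]) -> std::io::Result<()> {
    while !buf.is_empty() {
        let n = stream.write(buf)?; // number of bytes actually accepted
        if n == 0 {
            return Err(std::io::Error::new(
                std::io::ErrorKind::WriteZero,
                "connection closed",
            ));
        }
        buf = &buf[n..];
    }
    Ok(())
}
```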
Assume I have defined my own application-layer protocol on top of TCP for instant messaging, using a packet structure for the messages. As I am using both symmetric (AES) and asymmetric (RSA) encryption, I get a different packet size for different message types. Now to my questions.
How do I read from the socket so that I receive exactly one application-layer packet?
What size should I specify?
Thanks in advance.
I have two approaches in mind.
1. Read a fixed number of bytes from the TCP stream that encodes the actual packet size, then read that many bytes from the stream (roughly as in the sketch after this list).
2. Read up to the maximum packet size from the stream, check how many bytes were actually received, and decide from that which message type it was.
Now, a more general question: should I provide metadata like the packet size, encryption method, receiver, sender, etc.? If yes, should I encrypt this metadata as well?
Remember that with TCP, when reading from the network, there is no guarantee about the number of bytes received at that point in time. That is, a client might send a full packet in its write(), but that does not mean that your read() will receive the same number of bytes. Thus your code will always need to read some number of bytes from the network, then verify (based on the accumulated data) that you have received the necessary number of bytes, and then you can verify the packet (type, contents, etc) from there.
Some applications use state machine encoders/decoders and fixed size buffers for reading/writing their network data; other applications dynamically allocate buffers large enough for the "full packet", then continue reading bytes from the network until the "full packet" buffer is full. Which approach you take depends on your application. Thus the size you use for reading is not as important as how your code ensures that it has received a full packet.
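As an illustration of that accumulate-and-verify idea, here is a sketch in Rust, assuming the same hypothetical 4-byte length prefix the question mentions:

```rust
// Bytes are appended to a buffer as they arrive; a complete packet is only
// handed to the application once the buffer holds the whole thing.
struct Decoder {
    buf: Vec<u8>,
}

impl Decoder {
    fn new() -> Self {
        Decoder { buf: Vec::new() }
    }

    // Call this with whatever read() returned, however short.
    fn push(&mut self, data: &[u8]) {
        self.buf.extend_from_slice(data);
    }

    // Returns Some(payload) once a full packet has accumulated, None otherwise.
    fn next_packet(&mut self) -> Option<Vec<u8>> {
        if self.buf.len() < 4 {
            return None; // length field not complete yet
        }
        let len =
            u32::from_be_bytes([self.buf[0], self.buf[1], self.buf[2], self.buf[3]]) as usize;
        if self.buf.len() < 4 + len {
            return None; // payload not complete yet
        }
        let payload = self.buf[4..4 + len].to_vec();
        self.buf.drain(..4 + len); // keep any bytes belonging to the next packet
        Some(payload)
    }
}
```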
As for whether you should encrypt additional metadata, that depends very much on your threat model (i.e. what threats your protocol wants to guard against, what assurances your protocol needs to provide to its clients/users). There's no easy way to answer that question without more context/details.
Hope this helps!
I am a newbie to networks and in particular TCP (I have fooled around a bit with UDP, but that's it).
I am developing a simple protocol based on exchanging messages between two endpoints. Those messages need to be certified, so I implemented a cryptographic layer that takes care of that. However, while UDP has a sound definition of a packet that constitutes the minimum unit that can get transferred at a time, the TCP protocol (as far as my understanding goes) is completely stream oriented.
Now, this puzzles me a bit. When exchanging messages, how can I tell where one starts and the other one ends? In principle, I can obviously communicate fixed length messages or first communicate the size of each message in some header. However, this can be subject to attacks: while of course it is going to be impossible to distort or determine the content of the communication, the above technique would make it easy to completely disrupt my communication just by adding a single byte in the middle.
Say that I need to transfer a message 1234567 bytes long. First of all, I communicate 4 bytes with an integer representing the size of the message. Okay. Then I start sending out the actual message. That message gets split in several packets, which get separately received. Now, an attacker just sends in an additional packet, faking it as if it was part of the conversation. It can just be one byte long: this completely destroys any synchronization mechanism I have implemented! The message has a spurious byte in the middle, and it doesn't successfully get decoded. Not only that, the last byte of the first message disrupts the alignment of the second message and so on: the connection is destroyed, and with a simple, simple attack! How likely and feasible is this attack anyway?
So I am wondering: what is the maximum data unit that can be transferred at once? I understand that a call to send does not correspond one-to-one to a call to receive: the message can be split into different chunks. How can I group the packets together in some way so that I know they belong together? Is there a way to define a higher-level message that gets reconstructed and aligned as a whole and triggers a single call to a receive-like function? If not, what other solutions can I find to keep my communication re-alignable even in the presence of an attacker?
Basically, it is difficult to control the way the OS divides the stream into TCP packets. (The RFC defining the TCP protocol states that a TCP stack should allow clients to force it to send buffered data by using the push function, but it does not define how many packets this should generate. After all, the attacker can modify any of them.)
And these TCP packets can get divided even further into IP fragments on their way through the network (which can be opted out of with the 'Don't Fragment' IP flag -- but that flag can cause your packets not to be delivered at all).
I think that your problem is not about introducing packets into a stream protocol, but about securing it.
IPSec could be very beneficial in your scenario, as it operates on the network layer.
It provides integrity for every packet sent, so any modification on-the-wire gets detected and the invalid packets are dropped. In case of TCP the dropped packets get re-transmitted automatically.
(Almost) everything is done automatically by the OS -- so you do not need to worry about it (and make mistakes doing so).
The confidentiality can be assured as well (with the same advantage of not re-inventing the wheel).
IPSec should provide you with a reliable transport protocol on top of which you can use whatever framing format you like.
Another alternative is using SSL/TLS on top of the TCP session, which is less robust (as it closes the whole connection on an integrity error).
Now, an attacker just sends in an additional packet, faking it as if it was part of the conversation. It can just be one byte long: this completely destroys any synchronization mechanism I have implemented!
Thwarting such an injection problem is dealt with by securing the stream. Create an encrypted stream and send your packets through that.
Of course the encrypted stream itself then has this problem; its messages can be corrupted. But those messages have secure integrity checks. The problem is detected, and the connection can be torn down and re-established to resynchronize it.
Also, some fixed-length synchronizing/framing bit sequence can be used between messages: some specific bit pattern. It doesn't matter if that pattern occurs inside messages by accident, because we only look for it when things go wrong (a corrupt message is received); otherwise we skip over it. If a corrupt message is received, we then read bytes until we see the synchronizing pattern and assume that whatever follows it is the start of a message (length followed by payload). If that fails, we repeat the process. When we receive a correct message, we reply to the peer, which re-transmits anything we didn't get.
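A sketch of that resynchronization step (the sync pattern and names here are arbitrary examples, not any standard):

```rust
// After a corrupt message, discard bytes until the sync pattern is found,
// then treat whatever follows as the start of the next framed message.
const SYNC: &[u8] = &[0xDE, 0xAD, 0xBE, 0xEF]; // example pattern only

// Returns the index just past the sync pattern, or None if it isn't in `buf` yet.
fn resync(buf: &[u8]) -> Option<usize> {
    buf.windows(SYNC.len())
        .position(|w| w == SYNC)
        .map(|pos| pos + SYNC.len())
}
```

The caller would drain the buffer up to the returned index and then go back to reading an ordinary length-prefixed message.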
How likely and feasible is this attack anyway?
TCP connections are identified by four items: the source and destination IP, and source and destination port number. The attacker has to fake a packet which matches your stream in these four identifiers, and sneak that packet past all the routers and firewalls between that attacker and the receiving machine. The attacker also has to be in the right ballpark with regard to the TCP sequence number.
Basically, this is next to impossible for an attacker C to perpetrate against endpoints A and B which are both distant from C on the network. The faked source IP will get the packet rejected long before it reaches its destination. It's more plausible as an inside job (which includes malware): C is close to A and B.
I understand that when sending IP messages around, each hop in the network path between me and my packet's destination will check whether the packet I sent is bigger than the next hop's MTU. If so, the packet will be fragmented and the fragments will be sent separately to the next hop, only to be reassembled at the destination (or, in some cases, at the first NAT router encountered).
As far as I understand, this thing can be pretty bad, but I don't really understand why.
I understand that if the connection tends to drop a lot of packets, losing a single fragment means I have to resend the whole packet (this is actually the only thing I figured out myself)
Is there a chance that instead of being fragmented my packet will just be dropped?
How are packet fragments identified? Can I be 100% sure that they will be reassembled correctly? For example, if I send two IP packets of the same length nearly simultaneously to the same destination, how likely is it that fragments of the two will be swapped, like AAA, BBB being reassembled into ABA, BAB?
In principle, if packets aren't dropped and fragments are reassembled correctly, actually using packet fragmentation seems like a good idea to save on local bandwidth and avoid having to send more and more headers instead of just one big packet.
Thank you
IP fragmentation can cause several problems:
1) Application layer loss is increased
As you mentioned, if a single fragment is dropped, the entire layer 4 packet will be lost. Thus, for a network with a small random packet loss rate, the application-layer loss rate is increased by a factor approximately equal to the number of fragments per layer 4 packet (a quick numeric sketch follows this answer).
2) Not all networks handle fragmented packets
Some systems, such as Google's Compute Engine, do not reassemble fragmented packets.
3) Fragmentation can cause re-ordering
When routers split traffic down parallel paths, they may try to keep packets from the same flow on a single path. Because only the first fragment has layer 4 information like UDP/TCP port number, subsequent fragments may be routed down a different path, delaying assembly of the layer 4 packet and causing re-ordering.
4) Fragmentation can cause confusing behavior that is hard to debug
For example, if you send two UDP streams, A and B, from one source to a destination running Linux, the destination may discard packets from one of the streams. This is because by default, Linux "times out" fragment queues if more than 64 other fragments have been received from the same source. If stream A has a much higher data rate than stream B, 64 fragments from stream A may arrive in between the fragments from stream B, causing the B fragment to be dropped.
Thus, while IP fragmentation can reduce overhead by minimizing user headers, it may cause more trouble than it is worth.
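As a quick numeric sketch of point 1 (the figures are just an example): if each fragment is lost independently with probability p, a packet split into n fragments survives only if all n fragments arrive, so its loss probability is 1 - (1 - p)^n, which is roughly n * p for small p:

```rust
fn packet_loss_rate(fragment_loss: f64, fragments: u32) -> f64 {
    1.0 - (1.0 - fragment_loss).powi(fragments as i32)
}

fn main() {
    // e.g. 0.1% fragment loss, 4 fragments per packet -> ~0.4% packet loss
    println!("{:.4}%", packet_loss_rate(0.001, 4) * 100.0);
}
```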
To my knowledge, the only case where packets will be dropped rather than fragmented (barring cases where they would be dropped anyway) is packets which are marked "don't fragment". These packets are to be discarded rather than fragmented.
Fragmented packets have identifier, fragment offset, and more fragments fields in their headers that, when combined, allow the destination host to reliably reassemble the packet upon receipt of all the fragments. The first fragment's offset is zero, and the last fragment has the more fragments flag set to zero. It is still possible (although very unlikely) to reassemble an incorrect packet if two packets' headers are mutated so their fragment offsets are exchanged, but their checksums are still valid. The probability of this happening is essentially zero. Bear in mind that IP does not provide any mechanism for ensuring the integrity of the data payload, only the integrity of the control information in the header.
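As a rough sketch of how a receiver keys reassembly on those fields (purely illustrative, not a real IP stack: it ignores duplicate and overlapping fragments, and the real IPv4 key also includes source, destination, and protocol, not just the identification field):

```rust
use std::collections::HashMap;

struct Fragment {
    identification: u16,
    offset: usize,        // fragment offset (in bytes) within the original packet
    more_fragments: bool, // false only on the last fragment
    data: Vec<u8>,
}

struct Reassembler {
    // identification -> fragments collected so far
    pending: HashMap<u16, Vec<Fragment>>,
}

impl Reassembler {
    // Returns the reassembled payload once all fragments have arrived.
    fn push(&mut self, frag: Fragment) -> Option<Vec<u8>> {
        let frags = self.pending.entry(frag.identification).or_default();
        frags.push(frag);
        frags.sort_by_key(|f| f.offset);
        // Complete once the last fragment has arrived and the offsets cover a
        // contiguous byte range starting at zero.
        let last_seen = frags.last().map_or(false, |f| !f.more_fragments);
        let contiguous = frags
            .iter()
            .scan(0usize, |expected, f| {
                let ok = f.offset == *expected;
                *expected += f.data.len();
                Some(ok)
            })
            .all(|ok| ok);
        if last_seen && contiguous {
            let id = frags[0].identification;
            let done = self.pending.remove(&id).unwrap();
            Some(done.into_iter().flat_map(|f| f.data).collect())
        } else {
            None
        }
    }
}
```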
Packet fragmentation necessarily wastes bandwidth because each fragment has a copy of [most of] the original datagram's header. Packets can be fragmented down to only 8 payload bytes per fragment, so a maximum-sized packet of 60 + 65536 bytes could be fragmented into 60 * 8192 + 65536 bytes, an increase of about 750% in bytes on the wire in the worst case. The only example I can come up with where you would come out ahead is if you fragmented a packet in order to send its fragments in parallel using some kind of Frequency Division Multiplexing scheme with the knowledge that the other channels are free. At that point, it still seems like it would require more work than it would save to detect that circumstance and divide the packet rather than just sending it.
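Spelling out that worst-case arithmetic (using the same figures as above):

```rust
fn main() {
    let header = 60usize;                           // maximum IPv4 header size
    let payload = 65536usize;                       // maximum-sized payload
    let fragments = payload / 8;                    // 8 payload bytes per fragment
    let original = header + payload;                // 65_596 bytes
    let fragmented = fragments * header + payload;  // 557_056 bytes
    println!(
        "increase: {:.0}%",
        (fragmented as f64 / original as f64 - 1.0) * 100.0
    ); // prints roughly "increase: 749%"
}
```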
All the basic details about the mechanics of packet fragmentation in IP can be found in IETF RFC 791, if you're hungry for more information.