I am using Red Hat JBoss AMQ 7.1.0.GA and testing flow control with producerWindowSize. I used the example under amq71Install\examples\features\standard\queue; here is a sample jndi.properties:
# Neither of the following parameters works
#connectionFactory.ConnectionFactory=tcp://192.168.56.11:61616?producerWindowSize=1024
java.naming.provider.url=tcp://192.168.56.11:61616?producerWindowSize=1024
I send 10 messages with a total size smaller than 1024 bytes but I can still see them arrive on the broker. Did I miss something, or did I misunderstand this parameter?
Best regards
Lan
Yes, I believe you've misunderstood this parameter.
The producerWindowSize is the number of credits which the client will request from the broker. Each credit corresponds to one byte of data. Once the client receives those credits it is able to send that number of bytes. In your case the client requests 1024 credits from the broker, receives them, and is therefore able to send 1024 bytes before requesting more credits.
Since you're sending 10 messages with a total size smaller than 1024 bytes, you should expect them to arrive on the broker without any issue.
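If you want to actually see the flow control take effect, shrink the window and send messages bigger than it. Below is a minimal sketch, assuming the Artemis JMS client that ships with AMQ 7 (org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory), the broker URL from your question, and a queue named exampleQueue: with a 1024-byte window, each 4 KB message forces the client to wait for another credit grant from the broker before it can be written, so the producer is throttled (and would block outright only if the address itself is configured to block when full).

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class WindowDemo {
    public static void main(String[] args) throws Exception {
        // producerWindowSize is in bytes; 1024 matches the value from the question.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://192.168.56.11:61616?producerWindowSize=1024");
        Connection connection = cf.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("exampleQueue"));
            byte[] payload = new byte[4096]; // each message is larger than the whole window
            for (int i = 0; i < 10; i++) {
                BytesMessage message = session.createBytesMessage();
                message.writeBytes(payload);
                producer.send(message); // pauses while the client requests more credits from the broker
            }
        } finally {
            connection.close();
        }
    }
}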
Our project has Microsoft Service Bus (on-prem) running on Windows 2012 R2 servers for message processing.
When sending messages to a Service Bus topic above the size limit (say 10 MB), Service Bus shows a processing error and throws a socket timeout exception.
Just wanted to know if anyone has worked with sending large messages (say > 10 MB) to Service Bus topics. I would appreciate any suggested approach on how to handle this.
Also, is there a way to increase the Service Bus timeout configuration or the message size limit settings on Service Bus topics, either through PowerShell cmdlets or Service Bus Explorer?
Service Bus queues support a maximum message size of 256 KB (the header, which includes the standard and custom application properties, can have a maximum size of 64 KB).
There is no limit on the number of messages held in a queue, but there is a cap on the total size of the messages held by a queue. This queue size is defined at creation time, with an upper limit of 5 GB.
Are you asking about sending a message of size 10 MB? Service Bus doesn't allow messages that large. For Premium, the maximum message size is 1 MB, and for Standard it's 256 KB, as @Ana said.
Also is there a way to increase the service bus timeout configuration or message size limit settings?
Yes, there is a possibility to handle the time-to-live property of messages, which can be configured either at the time of queue/subscription creation or while sending an individual message. Refer to the documentation on setting time-to-live for a queue as well as for an individual message.
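For reference, here is a minimal sketch of setting time-to-live on an individual message using the Azure Service Bus Java SDK (azure-messaging-servicebus). The on-prem Service Bus ships its own client libraries, so treat the class names, the placeholder connection string, and the topic name purely as assumptions used to illustrate per-message TTL.

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;
import java.time.Duration;

public class TtlExample {
    public static void main(String[] args) {
        // Placeholder connection string and topic name.
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString("<your-connection-string>")
                .sender()
                .topicName("<your-topic>")
                .buildClient();

        ServiceBusMessage message = new ServiceBusMessage("payload");
        message.setTimeToLive(Duration.ofMinutes(5)); // per-message TTL; the shorter of this and the entity default wins
        sender.sendMessage(message);
        sender.close();
    }
}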
Also is there a way to increase message size limit settings?
No, as the maximum size is 1 MB (may be increased by Azure in the future).
To answer the question "Can we send messages (say > 10 MB) to Service Bus Topics?":
As of today the updated answer is YES: the Premium tier of Service Bus supports message sizes up to 100 MB, whereas Standard is still limited to 256 KB.
How to enable large message support for an existing queue (or topic)
Recommended:
While 100 MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
The Premium tier is recommended for production scenarios.
Assume I have defined my own application-layer protocol on top of TCP for instant messaging. I have used a packet structure for the messages. As I am using symmetric (AES) and asymmetric (RSA) encryption, I obtain a different packet size for different message types. Now to my questions.
How do I read from a socket so that I receive a single application-layer packet?
What size should I specify?
Thanks in advance.
I have two approaches in mind.
1. Read a fixed number of bytes from the TCP stream that represents the actual packet size, then read that many bytes from the stream to get the packet itself.
2. Always read the maximal packet size from the stream, check how many bytes were actually obtained, and decide from that which message type it was.
Now, a more general question: should I provide metadata like the packet size, encryption method, receiver, sender, etc.? If yes, should I encrypt this metadata as well?
Remember that with TCP, when reading from the network, there is no guarantee about the number of bytes received at that point in time. That is, a client might send a full packet in its write(), but that does not mean that your read() will receive the same number of bytes. Thus your code will always need to read some number of bytes from the network, then verify (based on the accumulated data) that you have received the necessary number of bytes, and then you can verify the packet (type, contents, etc) from there.
Some applications use state machine encoders/decoders and fixed size buffers for reading/writing their network data; other applications dynamically allocate buffers large enough for the "full packet", then continue reading bytes from the network until the "full packet" buffer is full. Which approach you take depends on your application. Thus the size you use for reading is not as important as how your code ensures that it has received a full packet.
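To make the first approach concrete, here is a minimal sketch of length-prefixed framing in Java, assuming a 4-byte big-endian length header in front of every packet (the header format and the 1 MB sanity limit are assumptions, not part of your protocol):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class Framing {

    public static void writePacket(OutputStream out, byte[] packet) throws IOException {
        DataOutputStream dout = new DataOutputStream(out);
        dout.writeInt(packet.length);   // header: size of the payload that follows
        dout.write(packet);             // payload: type byte, ciphertext, etc.
        dout.flush();
    }

    public static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readInt();     // blocks until the 4 header bytes arrive
        if (length < 0 || length > 1 << 20) {           // sanity limit (assumption: 1 MB max)
            throw new IOException("Bad packet length: " + length);
        }
        byte[] packet = new byte[length];
        din.readFully(packet);          // loops internally until all 'length' bytes arrive
        return packet;
    }
}

DataInputStream.readFully() performs the "keep reading until the buffer is full" loop for you, which is exactly the guarantee a bare read() call does not give.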
As for whether you should encrypt additional metadata, that depends very much on your threat model (i.e. what threats your protocol wants to guard against, what assurances your protocol needs to provide to its clients/users). There's no easy way to answer that question without more context/details.
Hope this helps!
Does this value only affect incoming traffic? If I set this value to, say, 4 MB on the ZooKeeper server as well as the ZooKeeper client and start my client, will I still get data > 4 MB when I request the path /abc/asyncMultiMap/subs? If /subs has data greater than 4 MB, is the server going to break it up into chunks <= 4 MB and send it in pieces to the client?
I am using ZooKeeper 3.4.6 on both the client (via vertx-zookeeper) and the server. I see errors on clients complaining that the packet length is greater than 4 MB:
java.io.IOException: Packet len4194374 is out of range!
at org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
"This is a server-side setting"
This statement is incorrect; jute.maxbuffer is evaluated on the client as well, by the Record-implementing classes that read from an InputArchive. Each time a field is read from an InputArchive the value is checked against jute.maxbuffer. E.g. ClientCnxnSocket.readConnectResult.
I investigated it in ZK 3.4
There is no chunking in the response.
This is a server-side setting. You will get this error if the entirety of the response is greater than the jute.maxbuffer setting. This response limit includes the list of children of znodes as well, so even if subs does not have a lot of data but has enough children that the total length of their paths exceeds the max buffer size, you will get the error.
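If you just need the client to accept the large response, the usual workaround is to raise jute.maxbuffer rather than expect chunking. A minimal sketch, assuming the 3.4.x Java client and the path from your question; the host is a placeholder and the 8 MB value is an arbitrary example. Because the property is read into a static field, set it before the ZooKeeper classes load, or pass -Djute.maxbuffer=8388608 on the command line instead (per the comment above, it needs to be large enough on both client and server):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;

public class MaxBufferExample {
    public static void main(String[] args) throws Exception {
        // Must be set before the client classes initialise; also raise it on the server JVM.
        System.setProperty("jute.maxbuffer", Integer.toString(8 * 1024 * 1024)); // 8 MB, example value

        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        byte[] data = zk.getData("/abc/asyncMultiMap/subs", false, null);
        System.out.println("read " + data.length + " bytes");
        zk.close();
    }
}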
Just assume we are sending a packet with TCP and we find that the packet has been dropped by the network. After the timer expires we try to resend the packet. In the meantime we get a new segment from the application layer, and we now try to send both segments, the old one and the new one, with the sequence number which has not been acknowledged. Now the packet size is greater than the old size. Just assume the old packet was delivered successfully but its acknowledgement was lost.
Let me explain this in steps:
1. (from sender) packet[SEQ=100,SEG_LEN=3,SEG="ABC"]----(To receiver)--->Receiver got it
2. (from receiver) packet[ACK=103]-----(To sender)---->Packet lost (sender couldn't receive it)
3. We get a new segment SEG="XYZ" from the application, and the timer expires for the previous packet
4. (from sender) packet[SEQ=100,SEG_LEN=6,SEG="ABCXYZ"]----(To receiver)--->Receiver got it
So now I want to know what will happen at the receiver side:
Will it drop the packet, assuming it is a duplicate? Or
will it accept the extra ("XYZ") segment or the total ("ABCXYZ") segment?
After the timer expires we try to resend the packet.
Not necessarily, see below.
In the meantime we get a new segment from the application layer, and we now try to send both segments, the old one and the new one, with the sequence number which has not been acknowledged.
Not necessarily. It is entirely possible that both segments are now coalesced and that you're only trying to send one segment at this point.
Now the packet size is greater than the old size.
The segment size may be larger than the old size, if what I said above is true. If it isn't, the sizes of the two packets you originally talked about haven't changed at all.
Just assume the old packet was delivered successfully but its acknowledgement was lost. So now I want to know what will happen at the receiver side.
Will it drop the packet, assuming it is a duplicate?
It will drop the original packet if it was resent according to your postulate. If it was coalesced according to my postulate it will accept the new part of the packet, from the currently accepted point onwards.
Will it accept the extra segment or the total segment?
This question is too confused to answer. You need to make up your mind whether you're talking about two packets or a single coalesced packet. You're talking about both at the same time.
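To make the coalescing case from the previous answer concrete, here is a minimal sketch (illustration only, not real stack code) of the trimming a receiver performs. With the numbers from the question, the receiver already expects byte 103, so a retransmitted segment starting at SEQ=100 carrying "ABCXYZ" contributes only "XYZ":

import java.util.Arrays;

public class SegmentTrim {
    // Return only the bytes the receiver has not yet delivered to the application.
    static byte[] acceptNewData(long rcvNxt, long seq, byte[] data) {
        long segEnd = seq + data.length;
        if (segEnd <= rcvNxt) {
            return new byte[0]; // pure duplicate: nothing new, the receiver just re-ACKs
        }
        int alreadyHave = (int) Math.max(0, rcvNxt - seq); // bytes already delivered
        return Arrays.copyOfRange(data, alreadyHave, data.length);
    }

    public static void main(String[] args) {
        // Receiver already has "ABC" (bytes 100-102), so it expects byte 103 next.
        byte[] fresh = acceptNewData(103, 100, "ABCXYZ".getBytes());
        System.out.println(new String(fresh)); // prints XYZ; the receiver then ACKs 106
    }
}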
So now I want to know what will happen at the receiver side:
Will it drop the packet, assuming it is a duplicate?
No.
Will it accept the extra ("XYZ") segment?
Yes.
or the total ("ABCXYZ") segment?
No.
You need to think about this. TCP/IP works. Seriously. For 25 years at least. If any of the alternative scenarios you've posted were true it wouldn't be usable.
I would like to know whether there is any open/free RADIUS server which supports RADIUS fragmentation, i.e. a RADIUS server which accepts packets greater than the 4 KB size limit from the client and will reassemble the packet at the server end. And once the whole packet is assembled, will it perform successful authentication of the packet?
Any pointers will help.
I'm not aware of any, and I'm not sure you could even do this. Since there is no nice header structure, I can't guarantee packets will be "complete". When we have a "large" packet, we end up splitting it and using a VSA carrying an association value. For example, we have an interim record with VSA(AssociationID)=00512121 and another with that same association ID.
You could do a lot of black magic with CoA messages.
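A rough sketch of the splitting idea mentioned above, in Java; the Chunk type and the association ID handling are assumptions for illustration, not anything an existing RADIUS library provides:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PayloadSplitter {
    // One fragment of the original payload, tagged so the far end can reassemble it.
    static final class Chunk {
        final String associationId;
        final int index;
        final int total;
        final byte[] data;
        Chunk(String associationId, int index, int total, byte[] data) {
            this.associationId = associationId;
            this.index = index;
            this.total = total;
            this.data = data;
        }
    }

    // Split a large payload into chunks that each fit under the RADIUS packet limit,
    // all carrying the same association ID (e.g. sent as a vendor-specific attribute).
    static List<Chunk> split(byte[] payload, String associationId, int maxChunkSize) {
        int total = (payload.length + maxChunkSize - 1) / maxChunkSize;
        List<Chunk> chunks = new ArrayList<>();
        for (int i = 0; i < total; i++) {
            int from = i * maxChunkSize;
            int to = Math.min(from + maxChunkSize, payload.length);
            chunks.add(new Chunk(associationId, i, total, Arrays.copyOfRange(payload, from, to)));
        }
        return chunks;
    }
}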
No, but there is an RFC currently in draft for this; instead of inventing your own scheme you should use the draft standard. When it's complete, we will add support for packet fragmentation to FreeRADIUS.
https://datatracker.ietf.org/doc/html/draft-perez-radext-radius-fragmentation-06