Send binary message using Net::Stomp::Client - perl

I need to send a binary message to a message broker using the Perl library Net::Stomp::Client, but whenever I send a message using the send or send_with_receipt methods, the message is received as a text message.
I'm using ActiveMQ on my server, and when I call consume, the received message is of type TextMessage. I need it to be of type BytesMessage.
Update:
I see in this link that setting the content-length header will set the type to BytesMessage, but I didn't find any example using Net::Stomp::Client. If anyone can provide an example it would be great.

I solved this by adding bytes_message => 1 to the send() method
In newer versions you need to use STOMP 1.1 or greater (the default is 1.0; pass version or accept_version to the STOMP client constructor) and also set the content-type header.
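For reference, this matches how ActiveMQ maps STOMP frames to JMS message types: a SEND frame that carries a content-length header becomes a BytesMessage, while one without it becomes a TextMessage. A minimal STOMP 1.1 SEND frame with the relevant headers (illustrative destination, 5-byte body "hello", trailing NUL shown as \0) looks roughly like this:

SEND
destination:/queue/example
content-type:application/octet-stream
content-length:5

hello\0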

Related

Bidirectional communication of Unix sockets

I'm trying to create a server that sets up a Unix socket and listens for clients which send/receive data. I've made a small repository to recreate the problem.
The server runs and it can receive data from the clients that connect, but I can't get the server's response to be read by the client without an error occurring on the server.
I have commented out the offending code on the client and server. Uncomment both to recreate the problem.
When the code to respond to the client is uncommented, I get this error on the server:
thread '' panicked at 'called Result::unwrap() on an Err value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/main.rs:77:42
MRE Link
Your code calls set_read_timeout to set the timeout on the socket. Its documentation states that on Unix it results in a WouldBlock error in case of timeout, which is precisely what happens to you.
As to why your client times out, the likely reason is that the server calls stream.read_to_string(&mut response), which reads the stream until end-of-file. On the other hand, your client calls write_all() followed by flush(), and (after uncommenting the offending code) attempts to read the response. But the attempt to read the response means that the stream is not closed, so the server will wait for EOF, and you have a deadlock on your hands. Note that none of this is specific to Rust; you would have the exact same issue in C++ or Python.
To fix the issue, you need to use a protocol in your communication. A very simple protocol could consist of first sending the message size (in a fixed format, perhaps 4 bytes in length) and only then the actual message. The code that reads from the stream would do the same: first read the message size and then the message itself. Even better than inventing your own protocol would be to use an existing one, e.g. to exchange messages using serde.
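Since the issue is language-agnostic, here is a minimal, illustrative sketch of the suggested 4-byte length-prefix framing, written in Java purely for illustration (the class and method names are invented for this example; in Rust the same pattern maps onto write_all for the prefix and read_exact on the receiving side):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative length-prefixed framing: every message is preceded by a 4-byte length,
// so the reader never has to wait for EOF to know where a message ends.
class Framing {
    static void writeMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length); // fixed-size prefix tells the peer how much to read
        out.write(payload);           // then the message body itself
        out.flush();
    }

    static byte[] readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();    // read the prefix first...
        byte[] payload = new byte[length];
        in.readFully(payload);        // ...then exactly that many bytes
        return payload;
    }
}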

PlayWS calculate the size of a http call without consuming the stream

I'm currently using the PlayWS HTTP client, which returns an Akka stream. From my understanding, I can consume the stream and turn it into a byte[] to calculate the size. However, this also consumes the stream and I can't use it anymore. Any way around this?
I think there are two different aspects related to the question.
1. You want to know the size of the server response in advance, to prepare a buffer. Unfortunately there is no guaranteed way to do this. The HTTP 1.1 spec explicitly allows a transfer mode in which the server does not know the size of the response in advance: chunked transfer encoding. See also this quote from 3.3.1. Transfer-Encoding:
A recipient MUST be able to parse the chunked transfer coding
(Section 4.1) because it plays a crucial role in framing messages
when the payload body size is not known in advance.
Section 3.3.3. Message Body Length specifies how the length of a message body is determined; besides the aforementioned chunked transfer encoding, it also contains the rather unhelpful:
Otherwise, this is a response message without a declared message
body length, so the message body length is determined by the
number of octets received prior to the server closing the
connection.
This is kept for backward compatibility and its use is discouraged, but it is still legal.
Still, in many real-world scenarios you can use the Content-Length header field that the server may return. However, there is a catch here as well: if gzip Content-Encoding is used, then Content-Length will contain the size of the compressed body.
To sum up: in the general case you can't get the size of the message body before you have fully received the server response, i.e. before performing a blocking call on the response. You may try to use Content-Length, and it may or may not help in your specific case.
2. You already have a fully downloaded response (or you are OK with blocking on your StreamedResponse) and you want to process it by first getting the size and only then processing the actual data. In that case you can first use the getBodyAsBytes method, which returns an IndexedSeq[Byte] and therefore has a size, and then convert it into a new Source using Source.single, which is exactly what the default (i.e. non-streaming) implementation of getBodyAsSource does.
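A rough sketch of that second approach, using Play's Java WS API for concreteness (the getBodyAsBytes/getBodyAsSource names assume Play 2.6+; the Scala API described above is analogous):

import akka.NotUsed;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import play.libs.ws.WSResponse;

class ResponseSize {
    // Materialize the body once, measure it, then re-wrap it as a Source for downstream code.
    static Source<ByteString, NotUsed> sizedSource(WSResponse response) {
        ByteString bytes = response.getBodyAsBytes(); // body is now fully in memory
        System.out.println("Body size: " + bytes.size() + " bytes");
        return Source.single(bytes);                  // same as the default getBodyAsSource()
    }
}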

Implementation of IDNs in JIDs as specified in RFC 6122

I have added Internationalized Domain Name (IDN) support to an XMPP client, as specified in RFC 6122. The RFC states:
Although XMPP applications do not communicate the output of the
ToASCII operation (called an "ACE label") over the wire, it MUST be
possible to apply that operation without failing to each
internationalized label.
However, with the domain I have available for testing (running Prosody 0.9.4; working on getting feedback from someone else about how Ejabberd handles this), sending a Unicode name in the "to" field of an XMPP stanza causes it to immediately return an XMPP error stanza and terminate the stream. If I apply the ToASCII operation before sending the stanza, the connection succeeds, and I can begin authentication with the server.
So sending:
<somestanza to="éxample.net"/>
Would cause an error, while:
<somestanza to="xn--xample-9ua.net"/>
works fine.
Is it correct to send the ASCII representation (ACE label) of the domain like this? If so, what does the spec mean when it says that "XMPP applications do not communicate the output of the ToASCII operation ... over the wire"? If not, how can I ensure compatibility with misbehaving servers?
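For reference, applying the ToASCII operation in code is usually a one-liner; a minimal sketch using java.net.IDN (one readily available implementation of the RFC 3490 ToASCII operation; the domain literal simply mirrors the example above):

import java.net.IDN;

public class AceExample {
    public static void main(String[] args) {
        // Convert an internationalized domainpart to its ACE form before putting it on the wire.
        String ace = IDN.toASCII("éxample.net");
        System.out.println(ace); // xn--xample-9ua.net
    }
}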

Openfire sends empty (without stamp attr) jabber:x:delay extension to smack

I receive an offline message from the Openfire server, but it contains an empty jabber:x:delay extension.
The message I receive is:
<message id="qU7N8-64" to="ac1#server.jj.ru" from="ac2#server.jj.ru/4847791" type="chat">
<body>test message</body>
<delay xmlns="urn:xmpp:delay"></delay>
<x xmlns="jabber:x:delay"></x>
</message>
I receive this message with the Smack library.
But when I connect to Openfire with Miranda IM, Openfire sends the jabber:x:delay extension with data.
Why does Openfire send an empty jabber:x:delay only to the Smack library?
Add this line after connecting:
ProviderManager.getInstance().addExtensionProvider("x", "jabber:x:delay", new DelayInformationProvider());
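For completeness, a small sketch of how that fits together (package names assume Smack 3.x, where DelayInformationProvider lives in org.jivesoftware.smackx.provider):

import org.jivesoftware.smack.packet.Message;
import org.jivesoftware.smack.provider.ProviderManager;
import org.jivesoftware.smackx.packet.DelayInformation;
import org.jivesoftware.smackx.provider.DelayInformationProvider;

class DelayExample {
    // Register the provider once, right after connecting.
    static void registerProvider() {
        ProviderManager.getInstance().addExtensionProvider(
                "x", "jabber:x:delay", new DelayInformationProvider());
    }

    // Read the timestamp from an incoming offline message (may be null if no stamp was parsed).
    static java.util.Date delayOf(Message message) {
        DelayInformation delay =
                (DelayInformation) message.getExtension("x", "jabber:x:delay");
        return delay == null ? null : delay.getStamp();
    }
}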
Openfire doesn't do anything different since it doesn't know (or care) what client is connected. The packet you are showing is very peculiar, since it contains both the legacy and current versions of Delayed Delivery, but with missing required attributes in both.
Try running with the VM argument -Dsmack.debugEnabled=true set, then check the incoming raw packets for the actual message content. Most likely one of two things is happening:
The time is missing, so Miranda is compensating by populating it with some default value, like the current date.
The time format does not conform to the spec, so the parser in Smack is omitting it.

JMeter TCP Sampler doesn't recognize end of stream

I'm trying to run a simple test with the TCP Sampler.
When using the default TCPClient class, after the response timeout passes, I receive a correct response from the server and then error 500 in the sampler results:
Response code: 500 Response message:
org.apache.jmeter.protocol.tcp.sampler.ReadException:
It seems that JMeter does not recognize the end-of-message characters (the server sends \r\n).
How can I configure JMeter to see the EOM?
I tried to use BinaryTCPClientImpl and set tcp.BinaryTCPClient.eomByte = 13 in jmeter.properties, but BinaryTCPClientImpl expects hex data, while I send a human-readable string, and it still ignores the eomByte.
Any ideas?
Found the problem.
The server did not send \r\n in several cases.
Everything started working after the server was fixed.
I came across the same behaviour and tried the offered solutions (sending \r\n at the end of the message on the server side / setting the EOL byte value option in the GUI), but they didn't work for me.
My solution: following this question I found that \0 is the EOL character JMeter expects in the TCP Sampler. When my server terminates the messages with \0, the message is received in JMeter.
Some more references: the JMeter documentation (the TCPClientImpl chapter is where tcp.eolByte is discussed).
Another option: if the size of the messages is constant, one can examine LengthPrefixedBinaryTCPClientImpl (see this discussion).
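For example, to make the sampler treat the NUL byte as end of message, the property mentioned above can be set in jmeter.properties (or the equivalent EOL byte value field can be filled in on the TCP Sampler itself, as noted earlier):

# jmeter.properties: byte value the TCP Sampler treats as end of message;
# a value outside the range -128..127 disables EOL checking entirely.
tcp.eolByte=0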
Can anyone give me a solution for this error? Why am I getting a 500 response code, and why is JMeter throwing the ReadException, if I have already received my successful response?
The TCP Sampler will not return until it receives the configured end-of-message byte (or the connection is closed).
Set your own EOL byte value in the TCP Sampler, and have the server terminate its response stream with that byte value.