Apart from the required data dictionaries (FIX50SP2.xml and FIX50SP2.modified.xml), which files do we need to modify to add new custom messages, so that the custom message classes are generated and the messages can be parsed successfully? Even after adding the new message definitions to the XMLs and generating the classes successfully, parsing fails in the acceptor's onMessage method when extracting the repeating groups. It seems the groups of the custom messages are not initialized on the acceptor side after the message is received: the group count is always 0, whereas for the predefined messages the groups and fields are initialized and retrievable. It would be great if someone could provide insight into this!
Refer: How to retrieve repeating groups from the FIX messages using quickfixj library?
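Below is a minimal sketch of how such a repeating group would typically be read on the acceptor side; the tag numbers, class name and method are assumptions, not taken from the question. One thing worth checking besides the generated classes: the acceptor's session configuration must also point at the modified dictionary (for FIX 5.0 SP2 that usually means UseDataDictionary=Y together with AppDataDictionary=FIX50SP2.modified.xml and TransportDataDictionary=FIXT11.xml), because if the session parses incoming messages with the stock dictionary it has no definition of the custom group and getGroupCount returns 0.

import quickfix.FieldNotFound;
import quickfix.Group;
import quickfix.Message;

public class CustomGroupReader {

    // Hypothetical tag numbers for the custom repeating group (assumptions).
    private static final int NO_MY_ENTRIES = 20001; // NumInGroup (count) tag
    private static final int MY_ENTRY_ID   = 20002; // first (delimiter) tag inside each group entry

    // Call this from the acceptor's onMessage with the cracked custom message.
    public void readEntries(Message message) throws FieldNotFound {
        // 0 here usually means the dictionary the session parsed with
        // does not define this group for this message type.
        int count = message.getGroupCount(NO_MY_ENTRIES);
        Group group = new Group(NO_MY_ENTRIES, MY_ENTRY_ID);
        for (int i = 1; i <= count; i++) { // group indexes are 1-based
            message.getGroup(i, group);
            String entryId = group.getString(MY_ENTRY_ID);
            System.out.println("entry " + i + ": " + entryId);
        }
    }
}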
I am using spring cloud contract for messages as described in
https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/1.2.1.RELEASE/single/spring-cloud-contract.html#_spring_cloud_contract_verifier_messaging
Everything works as described in the documentation.
I have one situation where my triggeredBy method raises two messages on the same channel (e.g. an SMS to be sent to two different parties), and I am not able to assert both messages correctly. The messages are received in random order, so the test sometimes passes and sometimes fails.
I need a way to assert both the messages correctly.
OutputMessage also has an assertThat method (https://github.com/spring-cloud/spring-cloud-contract/blob/v1.2.1.RELEASE/spring-cloud-contract-spec/src/main/groovy/org/springframework/cloud/contract/spec/internal/OutputMessage.groovy#L35); it is described here: https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/1.2.1.RELEASE/single/spring-cloud-contract.html#contract-dsl-common. You can use it to assert the other message as well. You can also use that assertion in the input part, so you know exactly which message was sent, and then send the missing one too.
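As a rough illustration of what that looks like on the base-test-class side: the strings passed to triggeredBy and assertThat in the contracts are executed as methods of the configured base class, so each of the two contracts can reference its own assertion method. All names below are made up for illustration.

public abstract class SmsContractsBase {

    // Referenced from both contracts via input { triggeredBy('sendBothSms()') }.
    public void sendBothSms() {
        // call the production code that publishes both SMS messages on the channel
    }

    // Referenced from the first contract via outputMessage { assertThat('assertPartyASms()') }.
    public void assertPartyASms() {
        // custom assertion distinguishing the SMS addressed to party A
    }

    // Referenced from the second contract via outputMessage { assertThat('assertPartyBSms()') }.
    public void assertPartyBSms() {
        // custom assertion distinguishing the SMS addressed to party B
    }
}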
I was wondering if there is a way to attach metadata, or even multiple metadata values, to a Service Bus queue message, to be used later on in an application for sorting while still maintaining FIFO in the queue.
So, in short, what I want to do is:
Maintain FIFO (First In, First Out) order in the queue, but since the messages are inserted into the queue from different sources, be able to tell which source each message came from, for example via metadata.
I know this is possible with Topics, where you can add a property to the message, but I am also unsure whether it is possible to add multiple properties to a topic message.
I hope I have made clear what I am asking.
I assume you use the .NET API. If that is the case, you can use the Properties dictionary to write and read your custom metadata:
BrokeredMessage message = new BrokeredMessage(body);
message.Properties.Add("Source", mySource);
You are free to add multiple properties too. This is the same for both Queues and Topics/Subscriptions.
I was wondering if there is a way to attach metadata, or even multiple metadata values, to a Service Bus queue message, to be used later on in an application for sorting while still maintaining FIFO in the queue.
To maintain FIFO in the queue, you would have to use Message Sessions; without them you cannot maintain FIFO in the queue itself. You can set a custom property and use it in your application to sort messages once they are received out of order, but you will not receive messages in FIFO order, as you were asking in your original question.
If you drop the requirement of having the order preserved on the queue, then the answer @Mikhail has provided will be suitable for in-process sorting based on custom properties. Just be aware that in-process sorting will not be a trivial task.
I'd like to model an Apache Camel route that accepts tcp requests containing xml messages.
Each message may result in a multitude of responses which should be sent back on the incoming socket. I've played around with the camel-netty component in sync mode which works for single messages.
But is it possible to send back multiple messages on the socket? Basically a split before the return.
from(String.format("netty:tcp://0.0.0.0:%s?sync=true&decoders=#length-decoder,#string-decoder&encoders=#string-encoder,#length-encoder", INBOUND_PORT))
    .id("my-mock")
    .unmarshal(jaxbDataFormat)
    .process(exchange -> {
        List<String> responses = service.accept(exchange.getIn().getBody(MyXmlRootElement.class));
        exchange.getOut().setBody(responses);
    })
    .split().body() // Split is not doing what it should. Should become multiple messages, and each should be returned with a delay
    .delay(2000);
My messages are length-encoded: the first 4 bytes contain an integer specifying the length of each individual message.
In my case the exception is an IllegalArgumentException, stating that the endpoint does not support ArrayList as the payload.
Caused by: [java.lang.IllegalArgumentException - unsupported message type: class java.util.ArrayList]
at org.apache.camel.component.netty.handlers.ServerResponseFutureListener.operationComplete(ServerResponseFutureListener.java:53) ~[camel-netty-2.16.0.jar:2.16.0]
at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:409) [netty-3.10.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:395) [netty-3.10.4.Final.jar:na]
Cheers.
That is not how it's designed; the sync option on netty is for sending one response message when the route ends.
I have built this for single messages as well, and that works. For multiple response messages, you could try to aggregate them into one and send that back to the client, assuming of course that aggregation is possible in your case.
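A minimal sketch of that aggregation idea, reusing the route from the question: instead of splitting, the processor joins the responses into a single body, so the route ends with exactly one reply. The newline separator is an assumption; the client has to be able to split the combined payload apart again.

from(String.format("netty:tcp://0.0.0.0:%s?sync=true&decoders=#length-decoder,#string-decoder&encoders=#string-encoder,#length-encoder", INBOUND_PORT))
    .id("my-mock")
    .unmarshal(jaxbDataFormat)
    .process(exchange -> {
        List<String> responses = service.accept(exchange.getIn().getBody(MyXmlRootElement.class));
        // One reply only: concatenate the individual responses into a single string
        // so the netty producer sends back exactly one length-encoded frame.
        exchange.getOut().setBody(String.join("\n", responses));
    });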
I have read about 'true' messaging, where instead of sending the payload on the bus you send an identifier. In our case, we have a lot of legacy apps/services that were designed to receive the message payload (XML), which is close to 4 MB (near the MSMQ limit). Is there a way for NServiceBus to handle large payloads and persist messages automatically, or another workaround, so that the publisher/subscriber services have to worry neither about the payload size nor about how to de/re-hydrate the payload?
Thank you in advance.
You could use the Message Sequence pattern. In NServiceBus, you would split the payload in the sender, wrap the chunks in a custom 'Sequence' IMessage, and then implement a saga at the other end to extract the chunks & reassemble. You would need to put some effort into error handling & timeouts.
You can always use the quick "fix" of compressing the messages.
A POCO serialized with the binary serializer can be compressed down by a large margin; we saw 20 MB messages compress down to 3.1 MB.
So if your messages are hovering around 4 MB, it might be simple to just write an IMessageSerializer that automatically compresses the message while it is on the wire.
I'm not aware of any internal NServiceBus capability to associate extra data with a message out of band.
I think you're right on the mark: if the entire payload can't fit within the limit, then it's better to persist it elsewhere on your own and pass an ID instead.
However, it may be possible for you to design a message structure such that a message could implement an IHasPayload interface (which would perhaps incorporate an ID and a Type?), and then your application logic could have a common method for getting the payload given an IHasPayload message.
I am using ActiveMQ to pass requests between different processes. In some cases, I have multiple duplicate messages (which are requests) in the queue, and I would like to have only one. Is there a way to send a message such that it replaces an older message with similar attributes? If there isn't, is there a way to inspect the queue and check for a message with specific attributes (in which case I would not send the new message if an older one exists)?
Clarification (based on Dave's answer): I am actually trying to make sure that there aren't any duplicate messages on the queue, to reduce the amount of processing that happens whenever the consumer gets a message. Hence I would like either to replace a message or not even put it on the queue.
Thanks.
This sounds like an ideal use case for the Idempotent Consumer which removes duplicates from a queue or topic.
The following example shows how to do this with Apache Camel, which is the easiest way to implement any of the Enterprise Integration Patterns, particularly if you are using ActiveMQ, which comes with Camel integrated out of the box:
from("activemq:queueA").
idempotentConsumer(memoryMessageIdRepository(200)).
header("myHeader").
to("activemq:queueB");
The only trick to this is making sure there's an easy way to calculate a unique ID expression for each message, such as pulling out an XPath from the document or, as in the above example, using some unique message header.
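For reference, a self-contained sketch of the same route using the Camel 2.x Java DSL; the queue names and the header used as the message id are taken from the snippet above, everything else is an assumption.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class DeduplicationRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("activemq:queueA")
            // Skip any message whose "myHeader" value has already been seen;
            // the in-memory repository remembers up to 200 ids.
            .idempotentConsumer(header("myHeader"),
                    MemoryIdempotentRepository.memoryIdempotentRepository(200))
            .to("activemq:queueB");
    }
}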
You could browse the queue and use selectors to identify the message. However, unless you have a small number of messages, this won't scale very well. Instead, your message should just be a pointer to a database record (or set of records). That way you can update the record, and whoever gets the message will then access the latest version of the record.
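If you do go down the browse-with-selector route, a minimal JMS sketch could look like the following; the broker URL, queue name and the requestId property are assumptions.

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DuplicateChecker {

    // Returns true if a message carrying the given requestId property is already queued.
    public boolean alreadyQueued(String requestId) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("requests");
            // Browsing with a selector inspects matching messages without consuming them.
            QueueBrowser browser = session.createBrowser(queue, "requestId = '" + requestId + "'");
            Enumeration<?> pending = browser.getEnumeration();
            return pending.hasMoreElements();
        } finally {
            connection.close();
        }
    }
}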