Spring Cloud Contract for message producer

I am using spring cloud contract for messages as described in
https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/1.2.1.RELEASE/single/spring-cloud-contract.html#_spring_cloud_contract_verifier_messaging
Everything works as described in the documentation.
I have one situation where my triggeredBy method raises two messages on the same channel (e.g. an SMS to be sent to two different parties), and I am not able to assert both messages correctly. The messages are received in random order, so the tests sometimes pass and sometimes fail.
I need a way to assert both messages correctly.

OutputMessage also has an assertThat method (https://github.com/spring-cloud/spring-cloud-contract/blob/v1.2.1.RELEASE/spring-cloud-contract-spec/src/main/groovy/org/springframework/cloud/contract/spec/internal/OutputMessage.groovy#L35), described here (https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/1.2.1.RELEASE/single/spring-cloud-contract.html#contract-dsl-common). You can assert the other message there as well. You can also use that assertion in the input part to determine exactly which message was sent, and then send the missing one too.
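For illustration, here is a minimal sketch of such an order-independent assertion (the class, method, and party names are hypothetical, and the wiring assumes the contract hands the received payload to the method, e.g. via assertThat('assertSmsPayload($it)') in the outputMessage block):

import org.assertj.core.api.Assertions;

// Hypothetical base class for the generated tests. The contract's outputMessage
// block would reference this method through its assertThat(...) element.
public abstract class SmsContractTestBase {

    // Accepts either of the two expected SMS messages, since delivery order
    // on the shared channel is not deterministic.
    public void assertSmsPayload(Object payload) {
        Assertions.assertThat(payload.toString())
                .containsPattern("PARTY_A|PARTY_B"); // hypothetical party markers
    }
}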

Related

Is there a way to explicitly acknowledge message receipt with QuickFIX/J?

For a guaranteed message receiver, in ACK-based protocols like Apache Kafka, TIBCO EMS/RVCM, IBM MQ, and JMS, there is a way to explicitly acknowledge receipt of a message. Explicit acks are not sent automatically when you return from a dispatcher's callback; there is an extra method on the session or message to say "I've processed this message". This explicit ack exists so that you can safely queue received messages for processing by another thread later and call the explicit-ack method only once you are really done processing the message (safely storing it to a DB, forwarding it to another MOM, etc.). Having this explicit method ensures you do not lose messages even if you crash after receiving them but before processing them.
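In JMS, for example, this is what Session.CLIENT_ACKNOWLEDGE provides; a minimal sketch (the queue name and process() are placeholders):

import javax.jms.*;

// With CLIENT_ACKNOWLEDGE the broker redelivers the message if the client
// crashes before calling acknowledge().
void consumeWithExplicitAck(Connection connection) throws JMSException {
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    MessageConsumer consumer = session.createConsumer(session.createQueue("orders"));
    Message message = consumer.receive();
    process(message);       // hypothetical: store to DB, forward to another MOM, ...
    message.acknowledge();  // only now is the receipt confirmed to the broker
}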
Now with QuickFIX/J (or FIX in general) I know it's not ACK-based; instead it persists the last received SeqNum in a file, and rather than sending acks, message guarantees are achieved by sending ResendRequests for missed SeqNums. But still, is there a way to tell the QuickFIX/J API "don't automatically persist this last SeqNum once I exit this onMessage() callback, but hold off until I tell you so"? In other words, is there a Session variant that doesn't persist SeqNums automatically, so that I can call something on the FIX message to persist the last SeqNum once I've really processed/saved that message?
(If this feature doesn't exist I think it would be a good addition to the API)

Avoid Data Loss While Processing Messages from Kafka

I'm looking for the best approach for designing my Kafka consumer. Basically, I would like to know the best way to avoid data loss in case there are any exceptions/errors during message processing.
My use case is as below.
a) The reason I am using a SERVICE to process the messages is that in the future I am planning to write an ERROR PROCESSOR application that runs at the end of the day and tries to reprocess the failed messages (not all messages, but only those that failed because of a dependency, such as a missing parent).
b) I want to make sure there is zero message loss, so I will save the message to a file if there are any issues while saving it to the DB.
c) In a production environment there can be multiple instances of the consumer and services running, so there is a high chance that multiple applications try to write to the same file.
Q-1) Is writing to a file the only option to avoid data loss?
Q-2) If it is the only option, how do I make sure multiple applications can write to and read from the same file at the same time? Please consider that in the future, once the error processor is built, it might be reading messages from the same file while another application is trying to write to it.
ERROR PROCESSOR - Our source follows an event-driven mechanism, and there is a high chance that the dependent event (for example, the parent entity of something) gets delayed by a couple of days. In that case, I want my ERROR PROCESSOR to be able to process the same messages multiple times.
I've run into something similar before. So, diving straight into your questions:
Not necessarily; you could instead send those messages back to Kafka on a new topic (let's say error-topic). Then, when your error processor is ready, it can just listen to this error-topic and consume the messages as they come in.
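A minimal sketch of that idea (the topic name, types, and the service call are assumptions; consumer and errorProducer are assumed to be configured elsewhere):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;

// Records that fail processing are republished to "error-topic" so the
// ERROR PROCESSOR can pick them up later.
for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
    try {
        service.process(record.value());
    } catch (Exception e) {
        errorProducer.send(new ProducerRecord<>("error-topic", record.key(), record.value()));
    }
}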
I think this has been addressed in the response to the first question. So, instead of writing to and reading from a file, and opening multiple file handles to do this concurrently, Kafka might be a better choice, as it is designed for exactly such problems.
Note: The following point is just some food for thought based on my limited understanding of your problem domain, so you may safely choose to ignore it.
One more point worth considering in the design of your service component: you might as well merge points 4 and 5 by sending all the error messages back to Kafka. That would let you process all error messages consistently, as opposed to putting some in the error DB and some in Kafka.
EDIT: Based on the additional information on the ERROR PROCESSOR requirement, here's a diagrammatic representation of the solution design.
I've deliberately kept the output of the ERROR PROCESSOR abstract for now just to keep it generic.
I hope this helps!
If you don't commit the consumed message's offset until after writing to the database, then nothing is lost for as long as Kafka retains the message. The tradeoff is that if the consumer did commit to the database but the Kafka offset commit fails or times out, you'd end up consuming those records again and potentially processing duplicates in your service.
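A sketch of that ordering (assumes enable.auto.commit=false and a saveToDatabase() of your own):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Commit offsets only after the DB write succeeds: a crash between the write
// and the commit causes redelivery (duplicates), never loss.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        saveToDatabase(record.value()); // hypothetical persistence step
    }
    consumer.commitSync();
}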
Even if you did write to a file, you wouldn't be guaranteed ordering unless you opened one file per partition and ensured all consumers ran on a single machine (because you'd be preserving state there, which isn't fault-tolerant). Deduplication would still need to be handled as well.
Also, rather than writing your own consumer that writes to a database, you could look into the Kafka Connect framework. For validating messages, you can similarly deploy a Kafka Streams application that filters the bad messages out of the input topic and forwards the good ones to the topic feeding the DB.
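A Streams sketch of that validation step (topic names and the isValid() check are hypothetical):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

// Route valid records toward the DB topic and the rest to an error topic.
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("input-topic");
input.filter((key, value) -> isValid(value)).to("db-topic");
input.filterNot((key, value) -> isValid(value)).to("error-topic");
new KafkaStreams(builder.build(), streamsConfig).start();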

How to respond with multiple messages in a sync (two-way) Camel route?

I'd like to model an Apache Camel route that accepts TCP requests containing XML messages.
Each message may result in a multitude of responses which should be sent back on the incoming socket. I've played around with the camel-netty component in sync mode which works for single messages.
But is it possible to send back multiple messages on the socket? Basically a split before the return.
from(String.format("netty:tcp://0.0.0.0:%s?sync=true&decoders=#length-decoder,#string-decoder&encoders=#string-encoder,#length-encoder", INBOUND_PORT))
    .id("my-mock")
    .unmarshal(jaxbDataFormat)
    .process(exchange -> {
        List<String> responses = service.accept(exchange.getIn().getBody(MyXmlRootElement.class));
        exchange.getOut().setBody(responses);
    })
    .split().body() // Split is not doing what it should: each element should become its own message, returned with a delay
    .delay(2000);
My messages are length-encoded: the first 4 bytes contain an integer specifying the length of each individual message.
In my case the exception is an IllegalArgumentException stating that the endpoint does not support ArrayList as the payload.
Caused by: [java.lang.IllegalArgumentException - unsupported message type: class java.util.ArrayList]
at org.apache.camel.component.netty.handlers.ServerResponseFutureListener.operationComplete(ServerResponseFutureListener.java:53) ~[camel-netty-2.16.0.jar:2.16.0]
at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:409) [netty-3.10.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:395) [netty-3.10.4.Final.jar:na]
Cheers.
That is not how it's designed; the sync option on netty is for sending one response message when the route ends.
I have designed this as well for single messages, and that works. For multiple response messages, you could try to aggregate them into one and send that back to the client, assuming of course that aggregation is possible in your case.
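If concatenation is acceptable for your protocol, the processor from the route above could return one joined payload instead of the List; a sketch (the separator is an assumption):

// Join the responses into a single reply so the netty endpoint has exactly
// one message to write back (pick a separator your client can parse).
.process(exchange -> {
    List<String> responses = service.accept(exchange.getIn().getBody(MyXmlRootElement.class));
    exchange.getOut().setBody(String.join("\n", responses));
})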

Without using the JMS wrapper, how to emulate a JMS topic with the HornetQ core API

I would like to translate the concept of JMS topics to the HornetQ core API.
The problem, from my brief examination, is that the main class JMSServerManagerImpl (from hornetq-jms.jar) uses JNDI to coordinate the various collaborators it requires. I would like to avoid JNDI, as it is not self-contained and is a globally shared object, which is a problem especially in an OSGi environment. One alternative is to copy the code starting at JMSServerManagerImpl, but that seems like a lot of work.
I would rather have confirmation that my approach to emulating how topics are supported in HornetQ is the right way to solve this problem. If anyone has sufficient knowledge, perhaps they can comment on what I think is the right approach to writing my own emulation of topics using the core API.
ASSUMPTION
If a message consumer fails (via rollback), the container will try delivering the message to a different consumer of the same topic.
EMULATION
1. Wrap each message that is added for the topic.
2. The sender sends the message with an acknowledgement handler set.
3. The wrapper from (1) rolls back after the real listener returns.
4. The sender then acknowledges delivery.
I am assuming that after step 4 the message is considered delivered once it has been given to all message receivers. If I have made any mistakes or my assumptions are wrong, please comment. I'm not sure exactly whether this understanding of how acknowledgements work is correct, so any pointers would be nice.
If you are trying to figure out how to send a message to multiple consumers using the core API, here is what I recommend:
1. Create queue 1 and bind it to address1.
2. Create queue 2 and bind it to address1.
3. Create queue N and bind it to address1.
4. Send a message on address1.
5. Start N consumers, where each consumer listens on one of queues 1-N.
This way it basically works like a topic.
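A core-API sketch of that setup (the connector configuration and all names are assumptions):

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

// Two queues bound to one address: every message sent to the address is
// routed to both queues, giving topic-like fan-out.
ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
        new TransportConfiguration(NettyConnectorFactory.class.getName()));
ClientSession session = locator.createSessionFactory().createSession();

session.createQueue("address1", "queue1", true); // durable queue 1
session.createQueue("address1", "queue2", true); // durable queue 2

ClientProducer producer = session.createProducer("address1");
ClientMessage message = session.createMessage(true);
message.getBodyBuffer().writeString("hello subscribers");
producer.send(message);                          // both queues get a copy

ClientConsumer consumer1 = session.createConsumer("queue1");
ClientConsumer consumer2 = session.createConsumer("queue2");
session.start();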
http://hornetq.sourceforge.net/docs/hornetq-2.0.0.BETA5/user-manual/en/html/using-jms.html
7.5. Directly instantiating JMS Resources without using JNDI

Replacing a message in a JMS queue

I am using ActiveMQ to pass requests between different processes. In some cases, I have multiple duplicate messages (which are requests) in the queue. I would like to have only one. Is there a way to send a message such that it replaces an older message with similar attributes? If there isn't, is there a way to inspect the queue and check for a message with specific attributes (in which case I would not send the new message if an older one exists)?
Clarification (based on Dave's answer): I am actually trying to make sure that there are no duplicate messages on the queue, to reduce the amount of processing the consumer does whenever it gets a message. Hence I would like either to replace a message or not put it on the queue at all.
Thanks.
This sounds like an ideal use case for the Idempotent Consumer which removes duplicates from a queue or topic.
The following example shows how to do this with Apache Camel, which is the easiest way to implement any of the Enterprise Integration Patterns, particularly if you are using ActiveMQ, which comes with Camel integrated out of the box:
from("activemq:queueA").
idempotentConsumer(memoryMessageIdRepository(200)).
header("myHeader").
to("activemq:queueB");
The only trick is making sure there's an easy way to calculate a unique ID expression for each message, such as pulling out an XPath from the document or, as in the above example, using some unique message header.
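For instance, with an XML payload the expression could be an XPath instead of a header (the path is hypothetical):

from("activemq:queueA")
    .idempotentConsumer(xpath("/request/@id"), memoryMessageIdRepository(200))
    .to("activemq:queueB");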
You could browse the queue and use selectors to identify the message. However, unless you have a small number of messages, this won't scale very well. Instead, your message should just be a pointer to a database record (or set of records). That way you can update the record, and whoever gets the message will access the latest version of the record.
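A browse-before-send sketch in plain JMS (the queue and the requestId property are assumptions):

import javax.jms.*;

// Check for a pending request with the same ID before enqueueing a duplicate.
void sendIfAbsent(Session session, Queue queue, String requestId, String payload) throws JMSException {
    QueueBrowser browser = session.createBrowser(queue, "requestId = '" + requestId + "'");
    boolean alreadyQueued = browser.getEnumeration().hasMoreElements();
    browser.close();
    if (!alreadyQueued) {
        TextMessage msg = session.createTextMessage(payload);
        msg.setStringProperty("requestId", requestId);
        session.createProducer(queue).send(msg);
    }
}

Note that this check-then-send is not atomic, which is another reason the pointer-to-a-database-record approach above scales better.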