Apache Camel XA rollback and store on failure queue - JPA

I am starting to think this is impossible now so hopefully somebody can give me some guidance.
In short, I have a Spring Boot application running Apache Camel routes. XA is configured using Atomikos and, as far as I can tell, all the XA-specific configuration is as it should be. When a route executes I can see a message removed from the JMS queue, a database insert executed using a transacted JPA component, and the message routed to an output JMS queue. All fine, and I can see log information that supports the transaction manager committing both the JMS and JPA parts.
The issue comes when I have an exception: I need to be able to attempt redelivery 3 times and, if that fails, route the message on to a failure queue, but not before the database insert is rolled back.
I have configured a TransactionErrorHandlerBuilder which sets the redelivery count to 3 and also manages the redelivery delay (a sketch of the setup follows). I can see all of that working, but I never manage to divert the message after the 3 delivery attempts to the route that I have set up to deliver to the failure queue. I have set the deadLetterUri to point to that route, but the transaction error handler never seems to make use of it; Camel just tries to redeliver the message 3 times over and over again until I kill the route.
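For reference, here is a minimal sketch of the setup described above. Endpoint URIs and the JPA entity are illustrative assumptions, and the transaction manager/policy wiring is omitted; the deadLetterUri is set on the builder exactly as described, and, per the behaviour above, it never gets used:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.spring.spi.TransactionErrorHandlerBuilder;

public class FailureQueueRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Error handler roughly as described in the question.
        TransactionErrorHandlerBuilder txErrorHandler = new TransactionErrorHandlerBuilder();
        txErrorHandler.maximumRedeliveries(3);
        txErrorHandler.redeliveryDelay(1000);
        txErrorHandler.setDeadLetterUri("direct:failure"); // never taken, per the question

        errorHandler(txErrorHandler);

        from("jms:queue:inputQueue")
            .transacted()                   // transaction policy bean assumed
            .to("jpa:com.example.MyEntity") // hypothetical entity
            .to("jms:queue:outputQueue");

        from("direct:failure")
            .to("jms:queue:failureQueue");
    }
}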
Is what I am asking not supported? I am really hoping I am missing something obvious.
(Camel 2.19)
(Spring Boot 1.5.3)
thanks
Paul

Related

Spring Apache Kafka decoupled message queue best practice

I have a problem regarding renewing an old data-export interface for a client of ours.
Thirteen years ago, a queue mechanism was developed in an Oracle package (purely SQL code): the data is inserted into a queue table and then exported via an IIS server built in C#, whose functionality is basic:
the IIS server exposes an HTTP GET endpoint which is polled 5 times per second (which, in terms of 13 years ago, counted as online polling).
Every request removes the first record (FIFO) from the queue table and sends it to the client.
Today, we have a Spring server which saves the data to the Oracle DB.
Coming to renew this interface, I've encountered issues regarding how to do it.
My main idea is to use Apache Kafka, which integrates well with Spring and has some good documentation, in order to achieve this online message queue.
My main problem is the high coupling the consumer needs to have: from what I have understood so far, the consumer needs to develop an architecture built specifically for the Kafka client, which is something I want to prevent. I want to build a service layer on our side which will expose the Kafka message queue, as sketched below.
I've also thought about using Apache NiFi for this and managing a queue there.
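To illustrate the kind of decoupling layer I have in mind, here is a rough sketch, assuming Spring Boot with spring-kafka auto-configuration. Topic, group, and endpoint names are made up, and the in-memory buffer is only for illustration (it loses messages on restart):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ExportBridgeController {

    // Buffers records consumed from Kafka until the legacy client polls them.
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

    @KafkaListener(topics = "export-topic", groupId = "legacy-export") // assumed names
    public void onMessage(String record) {
        buffer.add(record);
    }

    // Preserves the FIFO semantics of the old queue table:
    // each GET removes and returns exactly one record.
    @GetMapping("/export/next")
    public ResponseEntity<String> next() {
        String record = buffer.poll();
        if (record == null) {
            return ResponseEntity.noContent().build();
        }
        return ResponseEntity.ok(record);
    }
}

The legacy client keeps polling HTTP GET exactly as before, while Kafka stays hidden behind this service.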

Is it possible to implement the transactional outbox pattern only for RabbitMQ publish-failure scenarios

I have a system that uses MongoDB for persistence and RabbitMQ as the message broker. My challenge is that I only want to implement the transactional outbox for RabbitMQ publish-failure scenarios. I'm not sure this is possible because I also have consumers using the same MongoDB persistence, so when I write code that covers the transactional outbox for publish-failure scenarios, published messages reach the consumers before the MongoDB commitTransaction, and my consumers can't find the message in MongoDB because of the latency.
My code is something like below (a Java sketch follows the steps):
1- start a session transaction
2- insert the document with the session (so it doesn't persist until I call commit)
3- publish to RabbitMQ
4- if success, commitTransaction
5- if error, insert into the outbox document with the session, then commitTransaction
6- if something went wrong on the MongoDB side, abortTransaction (if the publish succeeded and MongoDB failed, my consumers first check for existence in MongoDB and, if the document doesn't exist, do nothing)
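In Java terms, the flow is roughly like this (a sketch only, using the MongoDB and RabbitMQ Java clients; collection, exchange, and routing-key names are made up):

import com.mongodb.client.ClientSession;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import com.rabbitmq.client.Channel;
import org.bson.Document;

public class OutboxOnFailureWriter {
    private final MongoClient mongo;
    private final MongoCollection<Document> documents;
    private final MongoCollection<Document> outbox;
    private final Channel channel;

    OutboxOnFailureWriter(MongoClient mongo, MongoCollection<Document> documents,
                          MongoCollection<Document> outbox, Channel channel) {
        this.mongo = mongo;
        this.documents = documents;
        this.outbox = outbox;
        this.channel = channel;
    }

    void write(Document doc, byte[] payload) {
        try (ClientSession session = mongo.startSession()) {
            session.startTransaction();                                   // 1
            documents.insertOne(session, doc);                            // 2
            try {
                channel.basicPublish("app-exchange", "events", null, payload); // 3
                session.commitTransaction();                              // 4
            } catch (Exception publishFailed) {
                outbox.insertOne(session, doc);                           // 5
                session.commitTransaction();
            }
        }
        // 6 - an uncommitted transaction is aborted when the session closes
    }
}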
So the problem here is that messages reach the consumer earlier than the MongoDB persistence. Do you advise any solution that covers my problem?
As far as I can tell, the architecture outlined in the picture at https://microservices.io/patterns/data/transactional-outbox.html maps directly onto MongoDB change streams:
keep the transaction around step 1 (your document insert)
insert into the outbox table in the transaction
set up a message relay process which requests a change stream on the outbox table and, for every inserted document, publishes a message to the message broker (sketched below)
The publication to the message broker can be retried, and the change stream reads can also be retried in case of any errors. You need to track resume tokens correctly; see e.g. https://docs.mongodb.com/ruby-driver/master/reference/change-streams/#resuming-a-change-stream.
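A minimal sketch of such a relay, assuming the MongoDB and RabbitMQ Java clients. Collection, exchange, and field names are assumptions, and durable resume-token tracking is only indicated in comments:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import org.bson.Document;

public class OutboxRelay {
    public static void main(String[] args) throws Exception {
        // Change streams require a replica set (or sharded cluster).
        MongoClient mongo = MongoClients.create("mongodb://localhost/?replicaSet=rs0");
        MongoCollection<Document> outbox =
                mongo.getDatabase("app").getCollection("outbox");

        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();

        // For every insert on the outbox collection, publish to the broker.
        for (ChangeStreamDocument<Document> event : outbox.watch()) {
            Document doc = event.getFullDocument();
            if (doc == null) continue; // not an insert
            channel.basicPublish("app-exchange",              // assumed exchange
                                 doc.getString("routingKey"), // assumed field
                                 null,
                                 doc.getString("payload").getBytes());
            // event.getResumeToken() is what you would persist durably here,
            // so the relay can resume from the right point after a restart.
        }
    }
}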
Limitations of this approach:
only one message relay process, no scalability and no redundancy - if it dies you won't get notifications until it comes back
Your proposed solution has a different set of issues: for example, by publishing notifications before committing, you open yourself up to the possibility of the notification processor not being able to find the document it got out of the message broker, as you said.
So I would like to share my solution.
Unfortunately it's not possible to implement the transactional outbox pattern only for failure scenarios.
What I decided is to build the architecture around high availability, so:
MongoDB as highly available persistence and RabbitMQ as a highly available message broker.
I removed all the session transactions that I coded before and implemented an immediate write and publish.
In the worst-case scenario:
1- insert the document (success)
2- RabbitMQ publish (failed)
3- insert into the outbox (failed)
What I will have is unpublished documents in my MongoDB. Even in the worst-case scenario I could republish the messages from MongoDB with another application, but I won't write that application until I actually face that case, because we cannot cover every failure scenario in our code; our message brokers and persistence must simply be highly available. A sketch of the resulting flow follows.
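For completeness, a sketch of the "immediate write and publish" flow described above (names are made up; note there is no session or transaction any more):

import com.mongodb.client.MongoCollection;
import com.rabbitmq.client.Channel;
import org.bson.Document;

public class WriteThenPublish {
    private final MongoCollection<Document> documents;
    private final MongoCollection<Document> outbox;
    private final Channel channel;

    WriteThenPublish(MongoCollection<Document> documents,
                     MongoCollection<Document> outbox, Channel channel) {
        this.documents = documents;
        this.outbox = outbox;
        this.channel = channel;
    }

    void handle(Document doc, byte[] payload) {
        documents.insertOne(doc);                 // 1 - always persisted first
        try {
            channel.basicPublish("app-exchange", "events", null, payload); // 2
        } catch (Exception publishFailed) {
            try {
                outbox.insertOne(doc);            // best-effort outbox entry
            } catch (Exception outboxFailed) {
                // 3 - worst case: the document sits unpublished in MongoDB;
                // a separate republisher could scan for such documents later
            }
        }
    }
}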

How to read value of max-delivery-attempts from Java in a MessageListener

I have configured the redelivery settings in the Wildfly 10 configuration, something like below.
<address-setting name="jms.queue.MyQueue"
                 redelivery-delay="2000"
                 max-redelivery-delay="10000"
                 max-delivery-attempts="5"
                 max-size-bytes="10485760"
                 address-full-policy="FAIL"/>
I haven't configured the DLQ, because that part I want to handle myself.
When a message fails, I would like to move it to a certain queue together with the error. Unfortunately, if I configure the DLQ, I only get the original message but not the reason why it failed.
For that I would like to read the JMSXDeliveryCount and decide if this is the last attempt; if so, I move the message to some other queue myself with the additional information.
Is it possible to read the original setting, as defined in standalone-full.xml, from my queue while consuming the message?
The max-delivery-attempts setting is not defined in the JMS specification so in order to retrieve it from the server you'll need to use the Wildfly management API. There are a couple of ways to do this - native or HTTP. To be clear, this will make your application difficult to port to other potential JMS providers and/or Java application servers.
To avoid having to use the Wildfly management API, you might consider setting a special property on the message from the producer to indicate how many times it should be delivered. Then you could just read this property in your consumer application and compare it to JMSXDeliveryCount, as sketched below. If you don't want to change the producer application, you could probably accomplish the same thing using an Artemis outgoing interceptor to set the property on the message as it's being delivered to the consumer.
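A sketch of what the consumer side might look like, assuming the producer sets a hypothetical maxDeliveryAttempts property as suggested (JMSXDeliveryCount itself is a standard JMS-defined property):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

public class RetryAwareListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
            int maxAttempts = message.getIntProperty("maxDeliveryAttempts"); // assumed property
            if (deliveryCount >= maxAttempts) {
                // Last attempt: forward to a custom failure queue with the
                // error details instead of letting the server handle it.
                forwardToFailureQueue(message);
                return;
            }
            process(message);
        } catch (JMSException e) {
            throw new RuntimeException(e); // trigger redelivery
        }
    }

    private void process(Message m) { /* business logic */ }
    private void forwardToFailureQueue(Message m) { /* send with error info */ }
}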

Recovering messages from rabbitmq queue in mule

I am very new to Mule ESB.
I have created a flow which has multiple queues (I am using RabbitMQ). The flow is something like this: messages are put into the first queue, then read and put into the second queue, then read into the third, and so on.
Note: I am sending messages concurrently using JMeter.
Let's say that before all the messages can be put into the third queue from the second queue, my RabbitMQ server is stopped. In this case, I want to recover my messages. Also, I should be able to know which messages have been put into the third queue and which are still left.
I might not have put my question in an elegant or understandable way, but I hope you understood what I want to achieve.
You can use the Rollback Exception Strategy (http://www.mulesoft.org/documentation/display/current/Rollback+Exception+Strategy) along with transactions (http://www.mulesoft.org/documentation/display/current/Transaction+Management); when properly implemented, messages that have not been delivered to the second queue will be rolled back automatically.
In the rollback exception strategy you can write your own custom behaviour. Also, why don't you use the RabbitMQ client to see which messages were in the third queue?

Implementing Hornetq

I need some clarity on the right approach to implementing JMS in our system.
Currently we have two load-balanced JBoss servers for end-user transaction purposes, and we are extending the notification features based on various events in the transactions. To make this work we have decided on the following approach: HornetQ will be embedded in the transaction JBoss servers, and an MDB will be attached in the same JBoss server to listen and call another JBoss server, which has some business code to categorize the users to be notified; finally, that server will make a call to the XMPP server with the appropriate users.
My doubt here is whether deploying the MDB (event consumer) in the transaction JBoss server is a good approach, or whether to move the MDB to a JBoss server dedicated to notification purposes. Please share some ideas on the better approach.
Regards,
Vairam
As I said earlier, your question here is poorly written, but I'm really trying to help you...
So: it's always a good choice to deploy MDBs to process transactions instead of using a database directly, as you are doing the TX asynchronously.
When you send data from one MDB to another application server, you can have both operations as part of the same TX; hence you can use XA to make sure the processing of the message and whatever is done afterwards (another message send, another EJB call, or another DB operation) is done as part of the same TX. A sketch of this follows.
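To illustrate, a minimal sketch of an MDB where the message consumption, the database write, and the onward send all share one container-managed XA transaction. This assumes a Java EE 7 container; queue lookups and the entity are made up:

import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.TextMessage;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@MessageDriven(activationConfig = @ActivationConfigProperty(
        propertyName = "destinationLookup",
        propertyValue = "java:/jms/queue/TransactionEvents")) // assumed queue
public class NotificationMdb implements MessageListener {

    @PersistenceContext
    private EntityManager em;        // must be backed by an XA datasource

    @Inject
    private JMSContext jms;          // injected context joins the JTA transaction

    @Resource(lookup = "java:/jms/queue/NotificationRequests") // assumed queue
    private Queue notificationQueue;

    @Override
    public void onMessage(Message message) {
        try {
            String body = ((TextMessage) message).getText();
            em.persist(new NotificationEvent(body));            // DB write in the XA tx
            jms.createProducer().send(notificationQueue, body); // onward send, same tx
            // A RuntimeException thrown here would roll back the consume,
            // the persist, and the send together.
        } catch (JMSException e) {
            throw new RuntimeException(e); // force rollback and redelivery
        }
    }
}

@Entity
class NotificationEvent {
    @Id @GeneratedValue Long id;
    String body;
    protected NotificationEvent() {}
    NotificationEvent(String body) { this.body = body; }
}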
If you need more help, please re-edit your question, making sure you are using the right terminology. I don't think you're facing a language barrier; you're just using the wrong terms. For example, you can't embed HornetQ in a transaction; that's just something that doesn't exist.
Your question is a bit confusing to understand. How can you deploy an MDB in a transaction? You deploy an MDB on an application server.
Your question is not making much sense. Perhaps it's a language barrier?