JMS Outbound Gateway - Receiving replies from two job instances - spring-batch

We are using the JMSOutboundGateway to send and receive messages over the reply channel within the JMSOutboundGateway. When we run multiple iterations of the same job using the same JMSOutboundGateway, it fails with the error "Message contained wrong job instance id [85] should have been [86]" (org.springframework.batch.integration.chunk.ChunkMessageChannelItemWriter.getNextResult()).
This happens because the same JMSOutboundGateway instance is used when the second job runs while the first is still in progress.
Is there a way I can run parallel executions of the same job type?

This is a known issue, see https://github.com/spring-projects/spring-batch/issues/1372 and https://github.com/spring-projects/spring-batch/issues/1096.
The workaround is to use a separate instance of the writer for each job to prevent sharing the same reply channel.
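A minimal configuration sketch of that workaround, assuming Java config and Spring Batch's remote-chunking setup (the item type `MyItem` and bean names are illustrative, not from the original post): making the writer step-scoped gives each job execution its own `ChunkMessageChannelItemWriter` and its own reply channel, so concurrent runs no longer share state.

```java
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.integration.chunk.ChunkMessageChannelItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.core.MessagingTemplate;

@Configuration
public class RemoteChunkingManagerConfig {

    // Step-scoped: a fresh writer (with a fresh reply channel) is created for
    // every job execution, so two concurrent runs of the same job no longer
    // share the reply channel. MyItem is a placeholder for your item type.
    @Bean
    @StepScope
    public ChunkMessageChannelItemWriter<MyItem> chunkWriter(MessagingTemplate messagingTemplate) {
        ChunkMessageChannelItemWriter<MyItem> writer = new ChunkMessageChannelItemWriter<>();
        writer.setMessagingOperations(messagingTemplate);
        writer.setReplyChannel(new QueueChannel()); // per-execution reply channel
        return writer;
    }
}
```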

Related

Laravel 8 "Queue::push" is working, but "dispatch" is not

I'm facing an issue with Laravel queued jobs.
I'm using Laravel v8.40.0 with Redis v6.2.5 and Horizon v5.7.14 for managing jobs.
I have a job class called MyJob which should write a message in log file.
If I use Queue::push(new MyJob()) everything seems to work fine: I see the job in Horizon and the new row in log file.
But if I use dispatch(new MyJob()) or MyJob::dispatch() it doesn't seem to push my job into queue: I can't see the job in Horizon and I see no results in log file.
I was following the docs (https://laravel.com/docs/8.x/queues#dispatching-jobs) to use queues correctly and I don't understand what I'm doing wrong.
Thank you

Sarama ClusterAdmin connection issue - broken pipe

I am using the Sarama (1.27) ClusterAdmin to manage topics in Kafka 1.1.0. My application, which manages Kafka topics, runs as a REST service. It runs fine for a while and I can get/create/delete topics.
But after some time elapses without any activity, a new topic request gets an error: write tcp xxxxx:37888->xxxxx:9092: write: broken pipe.
I came across this How to fix broker may not be available after broken pipe.
Since my application is running as a service, how do I prevent the broken pipe issue? I close the ClusterAdmin only when the application exits. The same ClusterAdmin connection is used to serve all requests. I reinitialize the ClusterAdmin for each request only if it is nil for some reason (usually it is not nil after the first initialization, so the same connection is reused).
Should I close the ClusterAdmin after each request is served and open a NewClusterAdmin() for each topic request, or is there a keepalive option that I need to use?
Here is my existing code:
if admin == nil {
	// note: 'admin, err :=' would declare a new 'admin' scoped to this block,
	// leaving the outer 'admin' nil; assign with '=' instead
	admin, err = NewClusterAdmin([]string{"localhost:9092"}, s.config)
	..
}
topicMetadata, err := admin.DescribeTopics([]string{topicName})
I also came across this error. My way to fix it was to retry several times, e.g. 2 to 10 times.

How to define contracts for both messaging and HTTP API using spring-cloud-contract

I have a situation where there are 2 services. Service A exposes a query API through an HTTP endpoint and also listens for incoming asynchronous command messages (service A owns both CQRS contracts).
Service B uses both endpoints of service A: to GET data and to invoke commands.
While implementing the contract (stub and tests) for the HTTP flow is quite simple, configuring the messaging part is tricky for me and I'm actually stuck on it.
The docs say there is publisher-side test generation, which suits the case of publishing events where the publisher owns the contract.
But how do you make it work when the message consumer owns the contract?
I can't figure out a solution, as I need a stub used in service A to verify that service A properly consumes command messages, and I also need generated tests on service B that verify that service B produces compliant command messages.
I'd appreciate any help.
Many thanks in advance.
Service A is the producer of the API and the consumer of messages. It owns only the contracts for HTTP; the messaging contracts are owned by Service B, which is the producer of messages. So you should have an HTTP contract defined on the Service A side plus a Stub Runner test to check that Service A can receive the message sent by Service B. Service B should have the messaging contract, to assert whether the message is properly sent, and a Stub Runner test for HTTP.
That might lead to a dependency cycle. If you have a cycle between your apps then, yes, what you have to do is ignore the Stub Runner test on one side until the jars get uploaded.
You've asked about storing contracts in a separate repository. You can do it - here are the docs https://cloud.spring.io/spring-cloud-static/Edgware.SR3/multi/multi__spring_cloud_contract_faq.html#_common_repo_with_contracts and here is an example https://github.com/spring-cloud-samples/spring-cloud-contract-samples/tree/master/beer_contracts
You've asked about not generating the tests for some reason (IMO that's the wrong thing to do). Instead of using <extensions>true</extensions> in Maven, you can manually specify which goals you want to execute and omit the test generation. In Gradle, just disable the generateContractTests task, AFAIR.
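For the Maven route, a hypothetical sketch of binding the goals explicitly (goal names per the spring-cloud-contract Maven plugin; verify them against your plugin version):

```xml
<!-- Without <extensions>true</extensions>, list the goals you want explicitly
     and leave out 'generateTests' to skip test generation. -->
<plugin>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-contract-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>convert</goal>        <!-- convert contracts to stub mappings -->
        <goal>generateStubs</goal>  <!-- package the stubs jar -->
        <!-- 'generateTests' deliberately omitted -->
      </goals>
    </execution>
  </executions>
</plugin>
```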

Spring Batch :- Running Job in two stages

Here is the Spring Batch design I am recommending to my client:
The UI application calls a Rest API on the API server. The Rest API creates a unique id and sends the unique id, job params, and job name as a JMS message to the batch server. The Rest API returns the unique token id to the UI.
A JMS message listener on the batch server creates a new Spring Batch job instance, sets the unique id as a job param, and runs the job.
The UI keeps polling the status Rest API, passing the unique token.
The Rest API looks up the job execution id in the job param table by the unique id and returns the job status to the UI.
Please advise: is there any way to create the job on the API server, so that the job instance and params are created but no steps are executed, and we know the job execution id?
Then, on the batch server, given the job execution id, we can rerun/resume the job.
I think you could use a JobRequest table to store the initial request with the initial unique token. You could pass this token to the JMS server along with the batch request. As soon as the JMS batch server creates/starts the request, it should also update the JobRequest table and insert the actual Batch JobInstance id.
Clients can then ask for the status using the unique token, and the Rest API server can use that token to look up the job id and get information about the job's progress.
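The suggested JobRequest-table flow can be sketched with a minimal in-memory stand-in (a real implementation would back this with a DB table; the class and method names here are illustrative, not from the original answer):

```java
import java.util.Map;
import java.util.OptionalLong;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory stand-in for the suggested JobRequest table.
// register() is called by the Rest API when it hands out the token;
// bindJobId() is called by the JMS batch server once the job is started.
class JobRequestRegistry {
    private static final long NOT_STARTED = -1L;
    private final Map<String, Long> tokenToJobId = new ConcurrentHashMap<>();

    void register(String token) {
        tokenToJobId.put(token, NOT_STARTED);
    }

    void bindJobId(String token, long jobExecutionId) {
        tokenToJobId.put(token, jobExecutionId);
    }

    // Empty if the token is unknown or the job has not started yet.
    OptionalLong lookupJobId(String token) {
        Long id = tokenToJobId.get(token);
        return (id == null || id == NOT_STARTED) ? OptionalLong.empty() : OptionalLong.of(id);
    }
}
```

The status endpoint can then answer "pending" for an empty lookup and query Spring Batch's job repository once an id is bound.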
I was able to do this by providing my own incrementer factory while creating the Job Repository bean. Now I can get the job execution id from the sequence at the UI layer and use the same id while running the job. The job execution id is stored in a thread local on the batch server. I am overriding the getNextKey() method from OracleSequenceMaxValueIncrementer.
I was debugging the Spring Batch code and came up with this approach, which worked:
My client uses an Oracle DB. Spring Batch provides this property in the batch-oracle.properties file:
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.OracleSequenceMaxValueIncrementer
I override the batch.database.incrementer.class property with a client-specific incrementer class by subclassing OracleSequenceMaxValueIncrementer and overriding the getNextKey() abstract method.
On the API server, I call only the job execution sequence, get the id, and pass it in the JMS message.
On the batch server, the job execution id is stored in a thread local. getNextKey() checks whether the incrementer name is the job execution sequence; if so, it takes the id from the thread local, otherwise it creates it the way Spring Batch normally does. getNextKey() is called by Spring Batch when it creates the job execution. Sequences for other tables are not affected, since their incrementer names differ.
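The thread-local override described above can be sketched in plain Java (the real class would extend OracleSequenceMaxValueIncrementer; the names, the sequence name check, and the placeholder fallback are illustrative assumptions):

```java
// Holds a job execution id pre-allocated on the API server, set by the
// JMS listener on the batch server before the job is launched.
class JobExecutionIdHolder {
    static final ThreadLocal<Long> PREALLOCATED_ID = new ThreadLocal<>();
}

// Sketch of the getNextKey() override (framework superclass omitted).
class ClientSequenceIncrementer {
    private final String incrementerName;

    ClientSequenceIncrementer(String incrementerName) {
        this.incrementerName = incrementerName;
    }

    // If this is the job-execution sequence and an id was pre-allocated
    // and stored in the thread local, reuse it; otherwise fall back to
    // the normal database sequence lookup.
    long getNextKey() {
        Long preallocated = JobExecutionIdHolder.PREALLOCATED_ID.get();
        if ("BATCH_JOB_EXECUTION_SEQ".equals(incrementerName) && preallocated != null) {
            return preallocated;
        }
        return nextFromDatabaseSequence();
    }

    // Stand-in for OracleSequenceMaxValueIncrementer's sequence query.
    long nextFromDatabaseSequence() {
        return 100L; // placeholder value for this sketch
    }
}
```

Sequences for other tables keep their normal behavior because their incrementer names never match the job-execution sequence name.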

MessageQueue.Exists(QueueName) returns false but it exists

The problem I'm having is with this code:
if (!MessageQueue.Exists(QueueName))
{
    MessageQueue.Create(QueueName, true);
}
It will check if a queue exists; if it doesn't I want it to create the queue. This code has been working and hasn't changed for a few months. Today I started receiving this error:
[MessageQueueException (0x80004005): A queue with the same path name
already exists.] System.Messaging.MessageQueue.Create(String path,
Boolean transactional) +239478
The queues are local and if I delete the specific queue it will work once. After the queue is created it starts to fail again with the same error message.
It looks like the issue may be because of the Network Load Balancing (NLB) configuration. I was unaware of a change that recently put the machine in a NLB environment. The configuration we are using is an unsupported one.
More information is in How Message Queuing can function over Network Load Balancing (NLB).