How to pass a runtime argument to an ItemReader while running a Job - spring-batch

I am using a RabbitMQ messaging queue and Spring Batch in combination. A producer service publishes messages to the queue. My code is the consumer, where I use Spring Batch to read, process and write. When a message is pushed to the queue I have to trigger the job (there is no controller endpoint). For that purpose I use @RabbitListener(queues = "BulkSolve_GeneralrequestQueue"), which listens for new messages and also receives them. Below is the code.
@EnableRabbit
public class Eventscheduler {

    @Autowired
    Job csvJob;

    @Autowired
    private JobLauncher jobLauncher;

    //@Scheduled(cron="0 */5 * ? * *")
    @RabbitListener(queues = "BulkSolve_GeneralrequestQueue")
    public void trigger() {
        Reader.batchstatus = false;
        Map<String, JobParameter> maps = new HashMap<>();
        maps.put("time", new JobParameter(System.currentTimeMillis()));
        JobParameters jobParameters = new JobParameters(maps);
        JobExecution execution = null;
        try {
            //JobLauncher jobLauncher = new JobLauncher();
            execution = jobLauncher.run(csvJob, jobParameters);
        } catch (JobExecutionAlreadyRunningException | JobRestartException
                | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
            e.printStackTrace();
        }
        System.out.println("JOB Executed:" + execution.getStatus());
    }
}
My problem: I am already receiving the published message here, so how can I pass that message POJO to the ItemReader before triggering the job, so that my ItemReader reads that message? Can anyone guide me on how to achieve this?
Thanks,

From the Javadoc of the RabbitListener annotation, your annotated method can take the message as a parameter. For example:
@RabbitListener(queues = "BulkSolve_GeneralrequestQueue")
public void trigger(Message message) {
    // use message as needed
}
You can then access the received message and use it as input to your job.
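For example (a minimal sketch, not taken from the original post: the "payload" parameter name, the UTF-8 conversion of the body and the use of a ListItemReader are assumptions for illustration; jobLauncher and csvJob are the beans from the question, Message is org.springframework.amqp.core.Message and ListItemReader comes from org.springframework.batch.item.support), the received body can be handed to the job as a JobParameter and bound into a step-scoped reader through late binding:
@RabbitListener(queues = "BulkSolve_GeneralrequestQueue")
public void trigger(Message message) throws Exception {
    // pass the received message body to the job instead of re-reading it in the reader
    String payload = new String(message.getBody(), StandardCharsets.UTF_8);
    JobParameters jobParameters = new JobParametersBuilder()
            .addString("payload", payload)
            .addLong("time", System.currentTimeMillis()) // keeps each job instance unique
            .toJobParameters();
    jobLauncher.run(csvJob, jobParameters);
}

// In the batch configuration: a step-scoped reader, created per step execution,
// with the job parameter injected by Spring Batch's late binding.
@Bean
@StepScope
public ItemReader<String> reader(@Value("#{jobParameters['payload']}") String payload) {
    // hypothetical reader that simply iterates over the lines of the payload
    return new ListItemReader<>(Arrays.asList(payload.split("\n")));
}
With this in place the reader no longer needs to know about RabbitMQ at all; it only sees the job parameter.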

Related

@Transactional with handling error and db-inserts in catch block (Spring Boot)

I would like to roll back a transaction for the data in case of errors and, at the same time, write the error to the db.
I can't manage to do this with @Transactional annotations.
The following code produces a runtime error (1/0) and still writes the data into the db. It also writes the data into the error table.
I tried several variations and followed similar questions on Stack Overflow, but I didn't succeed.
Does anyone have a hint on how to do this?
@Service
public class MyService {

    @Transactional(rollbackFor = Exception.class)
    public void updateData() {
        try {
            processAndPersist(); // <- db operation with inserts
            int i = 1 / 0;       // <- runtime error
        } catch (Exception e) {
            persistError(e.getMessage());
            trackReportError(filename, e.getMessage());
        }
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void persistError(String message) {
        persistError2Db(message); // <- db operation with insert
    }
}
You need a way to throw an exception out of the updateData() method to roll back its transaction, and at the same time you must not roll back the persistError() transaction.
@Transactional(rollbackFor = Exception.class)
public void updateData() {
    try {
        processAndPersist(); // <- db operation with inserts
        int i = 1 / 0;       // <- runtime error
    } catch (Exception e) {
        persistError(e.getMessage());
        trackReportError(filename, e.getMessage());
        throw new RuntimeException(e); // rethrowing here alone will not work
    }
}
Just throwing an exception will not help, because persistError() runs in the same transaction as updateData(): persistError() is called through the this reference, not through a reference to the Spring proxy.
Options to solve this:
- Use a self reference (see the section below).
- Use self injection: Spring self injection for transactions (a variant sketch follows the self-reference example).
- Move the call of persistError() outside updateData() (and its transaction). Remove @Transactional from persistError() (it will not work there) and rely on the repository's transaction inside persistError2Db().
- Move persistError() to a separate interface. It will then be called through a proxy.
- Don't use declarative transactions (the @Transactional annotation). Use programmatic transaction management to set transaction boundaries manually (https://docs.spring.io/spring-framework/docs/3.0.0.M3/reference/html/ch11s06.html); see the TransactionTemplate sketch after this list.
Also keep in mind that persistError() can fail too (and with high probability it will).
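For the last option, a minimal sketch using Spring's TransactionTemplate (illustrative only; processAndPersist() and persistError2Db() are the methods from the question, and PlatformTransactionManager is whatever transaction manager the application already configures):
@Service
public class MyService {

    private final TransactionTemplate txTemplate;

    public MyService(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void updateData() {
        try {
            // execute() rolls this transaction back and rethrows if the callback throws
            txTemplate.execute(status -> {
                processAndPersist(); // db operation with inserts
                int i = 1 / 0;       // runtime error -> rollback
                return null;
            });
        } catch (Exception e) {
            // runs in a fresh transaction, independent of the rolled-back one
            txTemplate.execute(status -> {
                persistError2Db(e.getMessage());
                return null;
            });
        }
    }
}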
Using self reference
You can pass a self reference to MyService so that the transactional methods are invoked on the Spring proxy rather than directly on MyServiceImpl.
@Service
public class MyServiceImpl implements MyService {

    public void doWork(MyService self) {
        DataEntity data = loadData();
        try {
            self.updateData(data);
        } catch (Exception ex) {
            log.error("Error for dataId={}", data.getId(), ex);
            self.persistError("Error");
            trackReportError(filename, ex);
        }
    }

    @Transactional
    public void updateData(DataEntity data) {
        persist(data); // <- db operation with inserts
    }

    @Transactional
    public void persistError(String message) {
        try {
            persistError2Db(message); // <- db operation with insert
        } catch (Exception ex) {
            log.error("Error for message={}", message, ex);
        }
    }
}

public interface MyService {
    void doWork(MyService self);
    void updateData(DataEntity data);
    void persistError(String message);
}
To use it:
MyService service = ...;
service.doWork(service);
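A variant of the same idea using self injection (the second option above), so callers don't have to pass the proxy in; the @Lazy annotation is an assumption here, used to avoid an eager circular reference during bean creation, and in this variant doWork() takes no parameter:
@Service
public class MyServiceImpl implements MyService {

    // proxy of this very bean; calls through it go via Spring's transaction interceptor
    @Lazy
    @Autowired
    private MyService self;

    @Override
    public void doWork() {
        DataEntity data = loadData();
        try {
            self.updateData(data);
        } catch (Exception ex) {
            self.persistError("Error");
        }
    }

    // updateData(...) and persistError(...) as in the example above
}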

@KafkaListener: behavior and tracking processing of events

We are using spring-kafka 2.3.0 in our app. We have observed some processing glitches in the scenarios below with the following code:
@Service
@EnableScheduling
public class KafkaService {

    public void sendToKafkaProducer(String data) {
        kafkaTemplate.send(configuration.getProducer().getTopicName(), data);
    }

    @KafkaListener(id = "consumer_grpA_id",
            topics = "#{__listener.getEnvironmentConfiguration().getConsumer().getTopicName()}",
            groupId = "consumer_grpA", autoStartup = "false")
    public void onMessage(ConsumerRecord<String, String> data) throws Exception {
        passA(data);
    }

    private void passB(String message) {
        // counter to keep track of retry attempts
        if (counter.containsKey(message.getEventID())) {
            // RETRY_COUNT = 5
            if (counter.get(message.getEventID()) < RETRY_COUNT) {
                retryAgain(message);
            }
        } else {
            firstRetryPass(message);
        }
    }

    private void retryAgain(String message) {
        counter.put(message.getEventID(), counter.get(message.getEventID()) + 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void firstRetryPass(String message) {
        // first-time entry for count and time
        counter.put(message.getEventID(), 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void passA(String message) {
        try {
            passToTarget(message); // call target processor
            LOGGER.info("Message Processed Successfully to the target");
        } catch (Exception e) {
            targetUnavailable = true;
            passB(message);
        }
    }

    private void passToTarget(String message) {
        // processor logic; if the target is not available, retry after 15 mins (passB)
    }

    @Scheduled(cron = "0 0/15 * 1/1 * ?")
    public void scheduledMethod() {
        try {
            if (targetUnavailable) {
                registry.start();
                firstTimeStart = false;
            }
            LOGGER.info(">>>Scheduler Running ?>>>" + registry.isRunning());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}
On receipt of the first message after a gap in processing, the consumer doesn't pick up that first message; the subsequent messages are processed.
As we don't have direct access to the Kafka topics, we aren't able to identify which event was not picked up by the consumer.
How do we track the events that are not picked up, and why does this happen?
We also configured a scheduler whose job is to keep the Kafka listener registry running. Is this scheduler required when we already have a listener configured?
What are the memory and CPU utilization implications of keeping the listener running? That was one of the reasons we used the registry to stop the listener explicitly whenever the target is down, so we need to validate whether this approach is sustainable. My hunch is that it works against the basic design of a listener, whose main job is to keep listening for new events irrespective of the target's status.
Edited*
You shouldn't stop the registry on the listener thread unless you use stop(Runnable); otherwise there will be a deadlock and a delay, since the container waits for the listener to exit.
Stopping the container (via the registry) won't actually take effect until any remaining records fetched by the last poll have been processed (unless you set max.poll.records=1).
When the listener exits normally, the record's offset is committed, so that record will not be redelivered on the next start.
You can use the ContainerStoppingErrorHandler for this use case. See here.
Throw an exception and the error handler will stop the container for you.
But that will stop the container on the first try.
If you want retries, use a SeekToCurrentErrorHandler and call the ContainerStoppingErrorHandler from the recoverer after retries are exhausted.
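A minimal sketch of that wiring (assuming spring-kafka 2.3.x, where SeekToCurrentErrorHandler accepts a recoverer and a BackOff; stopping through the KafkaListenerEndpointRegistry with stop(Runnable) in the recoverer is an assumption used here instead of invoking ContainerStoppingErrorHandler directly, and the container id matches the listener id from the question):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory,
        KafkaListenerEndpointRegistry registry) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Re-deliver a failed record up to 4 more times (5 attempts in total); once the
    // retries are exhausted, the recoverer stops the container asynchronously
    // (stop(Runnable), so the listener thread is not blocked, as noted above).
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            (record, exception) -> registry.getListenerContainer("consumer_grpA_id").stop(() -> { }),
            new FixedBackOff(0L, 4L)));
    return factory;
}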

Continually restarting a job using JobExecutionListener

We need a job to run continuously without the need for an external scheduler.
I have extended JobExecutionListener, as follows:
@Autowired
@Qualifier("myJob")
private Job job;

private int counter = 0;

@Override
public void afterJob(JobExecution jobExecution) {
    jobExecution.stop();
    JobParameters jobParameters = new JobParameters();
    JobParameter jobParameter = new JobParameter(Integer.toString(++counter));
    jobParameters.getParameters().put("counter", jobParameter);
    try {
        jobLauncher.run(job, jobParameters);
    } catch (JobExecutionAlreadyRunningException | JobRestartException
            | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
        throw new RuntimeException(e);
    }
}
When run, a JobExecutionAlreadyRunningException is thrown.
JobExecutionAlreadyRunningException: A job execution for this job is already running: JobInstance: id=0, version=0, Job=[myJob]
Where am I going wrong?
Thanks
From the official doc:
The shutdown is not immediate, since there is no way to force
immediate shutdown, especially if the execution is currently in
developer code that the framework has no control over, such as a
business service. However, as soon as control is returned back to the
framework, it will set the status of the current StepExecution to
BatchStatus.STOPPED, save it, then do the same for the JobExecution
before finishing.
You may have better luck using a specialized JobLauncher that launches the job after the previous job's thread terminates, or a custom TaskExecutor associated with the JobLauncher.
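As a sketch of the latter suggestion (the bean wiring and the choice of SimpleAsyncTaskExecutor are assumptions), a JobLauncher backed by an asynchronous TaskExecutor makes run() return right after the job is started, instead of launching on the thread of the still-finishing execution:
@Bean
public JobLauncher asyncJobLauncher(JobRepository jobRepository) throws Exception {
    SimpleJobLauncher launcher = new SimpleJobLauncher();
    launcher.setJobRepository(jobRepository);
    // jobs are executed on a separate thread supplied by the task executor
    launcher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    launcher.afterPropertiesSet();
    return launcher;
}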

Sending buffered image over socket from client to server

I am trying to send images captured on the client to the server. The images are captured using the Robot class and written to the client socket. On the server I read the buffered image and write it into the server's local storage area. I want the client to capture screenshots at a regular interval and send them to the server, and the server to read the images and store them in its repository.
public class ServerDemo {
    public static void main(String[] args) {
        try {
            ServerSocket serversocket = new ServerSocket(6666);
            System.out.println("server listening..........");
            while (true) {
                Thread ts = new Thread(new ServerThread(serversocket.accept()));
                ts.start();
                System.out.println("server thread started.........");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
ServerThread.java
public class ServerThread implements Runnable {
    Socket s;
    BufferedImage img = null;
    String savelocation = "d:\\Screenshot\\";

    public ServerThread(Socket server) {
        this.s = server;
    }

    @Override
    public void run() {
        try {
            System.out.println("trying to read Image");
            img = ImageIO.read(s.getInputStream());
            System.out.println("Image Reading successful.....");
        } catch (IOException e) {
            System.out.println(e);
            e.printStackTrace();
        }
        File save_path = new File(savelocation);
        save_path.mkdirs();
        try {
            ImageIO.write(img, "JPG", new File(savelocation + "img-" + System.currentTimeMillis() + ".jpg"));
            System.out.println("Image writing successful......");
        } catch (IOException e) {
            System.out.println(e);
            e.printStackTrace();
        }
    }
}
ClientDemo.java
public class ClientDemo {
    public static void main(String[] args) throws InterruptedException {
        try {
            Socket client = new Socket("localhost", 6666);
            while (true) {
                System.out.println("Hello");
                Thread th = new Thread(new ClientThread(client));
                th.start();
                System.out.println("Thread started........");
                Thread.sleep(1000 * 60);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
ClientThread.java
public class ClientThread implements Runnable {
    Socket c;

    public ClientThread(Socket client) {
        this.c = client;
    }

    @Override
    public void run() {
        try {
            System.out.println("client");
            Dimension size = Toolkit.getDefaultToolkit().getScreenSize();
            Robot robot = new Robot();
            BufferedImage img = robot.createScreenCapture(new Rectangle(size));
            System.out.println("Going to capture client screen");
            ImageIO.write(img, "JPG", c.getOutputStream());
            System.out.println("Image capture from client success...!");
        } catch (IOException | AWTException e) {
            e.printStackTrace();
        }
    }
}
Server Console
server listening..........
server thread started.........
trying to read Image
Image Reading successful.....
Image writing successful......
Client console
Hello
Thread started........
client
Going to capture client screen
Image capture from client success...!
Hello
Thread started........
client
Going to capture client screen
Hello
Thread started........
client
Going to capture client screen
It repeats like this. The code works perfectly the first time; after that it fails. Each run captures the image only once. What change do I have to make to capture and write images at regular intervals? Please help me.
Try this in ClientDemo.java
while (true) {
    System.out.println("Hello");
    Socket client = new Socket("localhost", 6666);
    Thread th = new Thread(new ClientThread(client));
    th.start();
    System.out.println("Thread started........");
    Thread.sleep(1000 * 60);
}
And make sure that you close the client socket once the thread (ClientThread.java) completes, for example in a finally block or at the end of run().
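A sketch of that cleanup inside ClientThread.run() (the names follow the original post; the finally block is the addition):
@Override
public void run() {
    try {
        Dimension size = Toolkit.getDefaultToolkit().getScreenSize();
        Robot robot = new Robot();
        BufferedImage img = robot.createScreenCapture(new Rectangle(size));
        ImageIO.write(img, "JPG", c.getOutputStream());
        System.out.println("Image capture from client success...!");
    } catch (IOException | AWTException e) {
        e.printStackTrace();
    } finally {
        try {
            c.close(); // releases the connection so the server can accept the next one
        } catch (IOException ignored) {
        }
    }
}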
You don't need ImageIO for the server end of this. Just send and receive bytes:
byte[] buffer = new byte[8192];
int count;
while ((count = in.read(buffer)) > 0) {
    out.write(buffer, 0, count);
}
I see the problem is in the server. The first time, it accepts a connection from the client (Thread ts = new Thread(new ServerThread(serversocket.accept()));), but the client only connects once (Socket client = new Socket("localhost", 6666);). When the first transfer is completed, the server sits in accept() again, waiting for a connection that never comes. Therefore either issue only one accept and use that socket for every transfer, or close both sockets, on the client and on the server, and accept/connect again for each transfer.

EJB Timer Transaction - XA Exception

I am using the EJB 3.0 timer. When my timeout method is invoked, I use JPA to insert a record into one of the tables. I defined the persist code in a stateless session bean and invoke its local interface inside my timeout method. I get the following exception when the thread comes out of the timeout method:
javax.transaction.xa.XAException: JDBC driver does not support XA, hence cannot be a participant in two-phase commit.
To force this participation, set the GlobalTransactionsProtocol attribute to LoggingLastResource (recommended) or EmulateTwoPhaseCommit for the Data Source
Our DB does not support XA transactions. We use WebLogic 10.3.1. Here is the code:
@EJB
private MyejbLocal myejbLocal;

@Timeout
public void callEjb(Timer timer) {
    try {
        myejbLocal.store();
    } catch (EntityExistsException e) {
        e.getMessage();
    } catch (Exception ex) {
        ex.getCause();
    }
}
Here is my implementation:
@Override
public void store() {
    try {
        Mytable mytable = new Mytable(new Date());
        persist(mytable);
    } catch (EntityExistsException e) {
        e.getMessage();
    } catch (Exception ex) {
        ex.getCause();
    }
}
I don't call the flush() method.
Please let me know if I have missed anything.
I also faced the same issue. You need to keep your JPA entity operation in a separate session bean and it will work.
http://prasunejohn.blogspot.in/2014/02/understanding-ejb-timer-service-31.html
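For illustration, a minimal sketch of that separation (the REQUIRES_NEW attribute is an assumption, not stated in the answer: it runs the insert in its own transaction instead of enlisting the non-XA data source in the timer's transaction):
@Stateless
public class MyejbBean implements MyejbLocal {

    @PersistenceContext
    private EntityManager em;

    @Override
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void store() {
        // the persist happens in a new, independent transaction
        em.persist(new Mytable(new Date()));
    }
}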