How to log job status to MongoDB using Talend

How can I log whether a job succeeded or failed into MongoDB once the job has completed in Talend?

If you want to save the job log into a table (or a MongoDB collection), follow these steps:
Main job --> OnSubjobOk --> tFixedFlowInput with variables jobname, "Success" --> tDBxxOutput (for MongoDB, tMongoDBOutput)
Main job --> OnSubjobError --> tFixedFlowInput with variables jobname, "Fail" --> tDBxxOutput (for MongoDB, tMongoDBOutput)
A sketch of the insert this last step performs is shown below.
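For illustration only, assuming the MongoDB Java sync driver is on the classpath (for example inside a tJava component or a routine): the connection string, database name "etl_logs" and collection name "job_log" are placeholder assumptions, and in a normal Talend job the tMongoDBOutput component writes this document for you.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Date;

public class JobLogWriter {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> logs = client
                    .getDatabase("etl_logs")
                    .getCollection("job_log");
            // One document per run: job name, status ("Success"/"Fail") and a timestamp
            logs.insertOne(new Document("jobName", "my_talend_job")
                    .append("status", "Success")
                    .append("loggedAt", new Date()));
        }
    }
}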

Related

Jenkins unable to catch Talend build exception exit code

I have configured a Jenkins job to call a Talend Data Integration job build.
The Talend components in the job have "Die on error" checked. When the Talend job fails it displays the error, but the Jenkins job still shows SUCCESS.
How can I catch the Talend failure exit code in Jenkins?
I have enabled "Die on error" for each component in the Talend job build.
D:\JENKINS-WS\Cloud_Insights\workspace\E2CI-DB-ORACLE-SJ-INTEGRATION\TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI>java -Xms256M -Xmx1024M -cp .;../lib/routines.jar;../lib/activation.jar;../lib/dom4j-1.6.1.jar;../lib/log4j-1.2.16.jar;../lib/mail-1.4.jar;trigger_load_oracle_sj_db_to_e2ci_0_1.jar;load_oracle_sj_db_stg_to_fct_0_1.jar;load_oracle_sj_db_csv_to_stg_0_1.jar;load_oracle_sj_db_stg_to_dim_0_1.jar;load_oracle_sj_db_dim_to_lu_0_1.jar; e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI --context=DEV
tRunJob_1 in TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI call LOAD_ORACLE_SJ_DB_CSV_TO_STG with:
Exception in component tRunJob_1 (TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI)
java.lang.RuntimeException: Child job returns 1. It doesn't terminate normally.
Exception in component tFileList_1 (LOAD_ORACLE_SJ_DB_CSV_TO_STG)
java.lang.RuntimeException: No file found in directory \prod4271\E2CI-DBOPS\IN
at e2ci_db_integration.load_oracle_sj_db_csv_to_stg_0_1.LOAD_ORACLE_SJ_DB_CSV_TO_STG.tFileList_1Process(LOAD_ORACLE_SJ_DB_CSV_TO_STG.java:1421)
at e2ci_db_integration.load_oracle_sj_db_csv_to_stg_0_1.LOAD_ORACLE_SJ_DB_CSV_TO_STG.runJobInTOS(LOAD_ORACLE_SJ_DB_CSV_TO_STG.java:5292)
at e2ci_db_integration.load_oracle_sj_db_csv_to_stg_0_1.LOAD_ORACLE_SJ_DB_CSV_TO_STG.main(LOAD_ORACLE_SJ_DB_CSV_TO_STG.java:5131)
at e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.tRunJob_1Process(TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.java:736)
at e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.runJobInTOS(TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.java:3192)
at e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.main(TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.java:3031)
Triggering a new build of E2CI-DB-ORACLE-CHG-INTEGRATION
Finished: SUCCESS

Camunda Cockpit and REST API down but application up / JobExecutor config

We are facing a major incident in our Camunda Orchestrator. When we hit 100 running process instances, Camunda Cockpit takes an eternity and never responds.
We have the same issue when calling /app/engine/.
Only a few messages are consumed from RabbitMQ, and then everything stops.
The application itself, however, is not down.
I suspect a process engine configuration issue, because I cannot get any job executor log output.
When I set JobExecutorActivate to false, Cockpit and queue consumption work fine, but processes stop at the end of the first subprocess.
We have this log looping non-stop:
2018/11/17 14:47:33.258 DEBUG ENGINE-14012 Job acquisition thread woke up
2018/11/17 14:47:33.258 DEBUG ENGINE-14022 Acquired 0 jobs for process engine 'default': []
2018/11/17 14:47:33.258 DEBUG ENGINE-14023 Execute jobs for process engine 'default': [8338]
2018/11/17 14:47:33.258 DEBUG ENGINE-14023 Execute jobs for process engine 'default': [8217]
2018/11/17 14:47:33.258 DEBUG ENGINE-14023 Execute jobs for process engine 'default': [8256]
2018/11/17 14:47:33.258 DEBUG ENGINE-14011 Job acquisition thread sleeping for 100 millis
2018/11/17 14:47:33.359 DEBUG ENGINE-14012 Job acquisition thread woke up
And this log too (for queue consumption):
2018/11/17 15:04:19.582 DEBUG Waiting for message from consumer. {"null":null}
2018/11/17 15:04:19.582 DEBUG Retrieving delivery for Consumer@5d05f453: tags=[{amq.ctag-0ivcbc2QL7g-Duyu2Rcbow=queue_response}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,4), conn: Proxy@77a5983d Shared Rabbit Connection: SimpleConnection@17a1dd78 [delegate=amqp://guest@127.0.0.1:5672/, localPort= 49812], acknowledgeMode=AUTO local queue size=0 {"null":null}
Environment:
Spring Boot 2.0.3.RELEASE, Camunda v7.9.0 with PostgreSQL, RabbitMQ
Camunda BPM listens to and pushes to 165 RabbitMQ queues.
Configuration:
# Data source (PostgreSql)
com.campDo.fr.camunda.datasource.url=jdbc:postgresql://localhost:5432/campDo
com.campDo.fr.camunda.datasource.username=campDo
com.campDo.fr.camunda.datasource.password=password
com.campDo.fr.camunda.datasource.driver-class-name=org.postgresql.Driver
com.campDo.fr.camunda.bpm.database.jdbc-batch-processing=false
oms.camunda.retry.timer=1
oms.camunda.retry.nb-max=2
SpringProcessEngineConfiguration:
@Bean
public SpringProcessEngineConfiguration processEngineConfiguration() throws IOException {
final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
config.setDataSource(camundaDataSource);
config.setDatabaseSchemaUpdate("true");
config.setTransactionManager(transactionManager());
config.setHistory("audit");
config.setJobExecutorActivate(true);
config.setMetricsEnabled(false);
final Resource[] resources = resourceLoader.getResources(CLASSPATH_ALL_URL_PREFIX + "/processes/*.bpmn");
config.setDeploymentResources(resources);
return config;
}
Pom dependencies:
<dependency>
<groupId>org.camunda.bpm.springboot</groupId>
<artifactId>camunda-bpm-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.camunda.bpm.springboot</groupId>
<artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
</dependency>
<dependency>
<groupId>org.camunda.bpm.springboot</groupId>
<artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
</dependency>
I am quite sure that my job executor config is wrong.
Update :
I can start Cockpit and make Camunda consume messages by setting JobExecutorActivate to false, but processes still stop at the first step that requires the job executor:
config.setJobExecutorActivate(false);
Thanks for your help.
First: if your process contains async steps (jobs), it will pause there. Activating the job executor just means that Camunda manages how these jobs are worked on. If you disable the executor, your processes will still stop, and since no one executes the jobs, they remain stopped.
Disabling job execution is only sensible during testing or when you have multiple nodes and only some of them should do the processing.
To your main issue: the job executor works with a thread pool. From what you describe, it is very likely that all threads in the pool block forever, so they never finish and never return, meaning your system is stuck.
This happened to us a while ago when working with an SMTP server: there was an infinite timeout on the connection, so the threads kept waiting even though the machine was not available.
Since job execution in Camunda is highly reliable and well tested per se, I would suggest that you double-check everything you do in your delegates. If you are lucky (and I am right), you will find the spot where you just wait forever, as sketched below.
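For illustration only, here is a minimal sketch of a delegate that guards an external call with explicit timeouts so a job executor thread can never block indefinitely. The endpoint URL and timeout values are assumptions, not part of the original setup:

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

import java.net.HttpURLConnection;
import java.net.URL;

public class CallExternalSystemDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Hypothetical endpoint; the important part is the explicit timeouts.
        URL url = new URL("http://example.org/api/ping");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setConnectTimeout(5_000);   // fail fast instead of blocking a worker thread
        con.setReadTimeout(10_000);     // never wait forever on an unreachable system
        try {
            int status = con.getResponseCode();
            execution.setVariable("externalCallStatus", status);
        } finally {
            con.disconnect();
        }
    }
}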

How to stop and resume a Spring Batch job

Goal: I am using Spring Batch for data processing and I want an option to stop a job and resume it where it left off.
Issue: I am able to send a stop signal to a running job and it stops successfully. But when I send a start signal to the same job, it creates a new instance of the job and starts as a fresh job.
My question is: how can we achieve resume functionality for a stopped job in Spring Batch?
You just have to run it with the same parameters. Just make sure you haven't marked the job as non-restartable and that you're not using RunIdIncrementer or similar to automatically generate unique job parameters.
See for instance, this example. After the first run, we have:
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{}] and the following status: [STOPPED]
Status is: STOPPED, job execution id 0
#1 step1 COMPLETED
#2 step2 STOPPED
And after the second:
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{}] and the following status: [COMPLETED]
Status is: COMPLETED, job execution id 1
#3 step2 COMPLETED
#4 step3 COMPLETED
Note that stopped steps will be re-executed. If you're using chunk-oriented steps, make sure that at least the ItemReader implements ItemStream (and does it with the correct semantics).
Steps marked with allow-start-if-complete will always be re-run.
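As a rough sketch of "run it with the same parameters" (the job, launcher and parameter key below are illustrative, not taken from the linked example):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class RestartExample {

    // jobLauncher and myJob would normally be injected Spring beans.
    public JobExecution resume(JobLauncher jobLauncher, Job myJob) throws Exception {
        // Identical parameters -> same JobInstance -> Spring Batch resumes the
        // STOPPED/FAILED execution instead of starting a fresh one.
        JobParameters params = new JobParametersBuilder()
                .addString("input.file", "data/input.xml")   // must match the stopped run
                .toJobParameters();
        return jobLauncher.run(myJob, params);
    }
}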

How does Spring Batch Admin stop a running job?

How does Spring Batch Admin stop a running job from the UI?
In the Spring Batch Admin online documentation I read the following:
"A job that is executing can be stopped by the user (whether or not it
is launchable). The stop signal is sent via the database and once
detected by Spring Batch in whatever process is running the job, the
job is stopped (status moves from STOPPING to STOPPED) and no further
processing takes place."
Does that mean the Spring Batch Admin UI directly changes the status of the job inside the Spring Batch tables?
UPDATE: I tried executing the query below on the running job.
update batch_job_execution set status="STOPPED" where job_instance_id=19;
The query updates the row in the DB, but Spring Batch is not able to stop the running job.
If anybody has tried this, please share the logic here.
You're confusing BatchStatus with ExitStatus.
What you are doing with that SQL is changing the STATUS to STOPPED.
When a job is running you can stop it from the code: in each step iteration, check the status, and if STOPPING is set, tell the step to stop.
Anyway, what you are doing is not elegant. The correct way is explained in Common Batch Patterns -> 11.2 Stopping a Job Manually for Business Reasons:
public class FooProcessor implements ItemProcessor<FooIn, FooOut> {
    public FooOut process(FooIn item) throws Exception {
        if (sendToStop(item)) {
            throw new MyStopException("I need to stop: " + item);
        }
        // do my stuff
        return new FooOut(item);
    }
}
Another simple way to stop a chunk step is to return null in the reader. This signals that there are no more items to read and ends the step:
public T read() throws Exception {
    T item = delegate.read();
    if (ifNeedStop(item)) {
        return null; // end the step here
    }
    return item;
}
I investigated the Spring Batch code.
It seems that both the version and the status of BATCH_JOB_EXECUTION are updated.
This works for me:
update batch_job_execution set status="STOPPED", version=version+1 where job_instance_id=19;
If you look into the jars of Spring Batch Admin, you can see that AbstractStep.java checks the status of the step and the job in the database.
Based on this status it validates the step before running it.
This works well in most cases, but not within a chunk, because the next check only happens after the current (possibly long) processing. If you want to handle that yourself, you can implement your own listener that checks the status (at the cost of extra DB hits), as sketched below.
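One possible shape for such a listener (an assumption-laden sketch, not what Spring Batch Admin itself does): before each chunk, re-read the job execution status from the job repository and ask the step to terminate if it has been flipped to STOPPING:

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.scope.context.ChunkContext;

public class StopCheckingChunkListener implements ChunkListener {

    private final JobExplorer jobExplorer;

    public StopCheckingChunkListener(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    @Override
    public void beforeChunk(ChunkContext context) {
        StepExecution stepExecution = context.getStepContext().getStepExecution();
        // Re-read the execution from the job repository: this is the extra DB hit
        JobExecution latest = jobExplorer.getJobExecution(stepExecution.getJobExecutionId());
        if (latest != null && latest.getStatus() == BatchStatus.STOPPING) {
            // Finish the current chunk, then stop the step gracefully
            stepExecution.setTerminateOnly();
        }
    }

    @Override
    public void afterChunk(ChunkContext context) { }

    @Override
    public void afterChunkError(ChunkContext context) { }
}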

Spring Batch state when a step fails

I'm trying out Spring Batch. I have seen many examples of running jobs with an ItemReader and an ItemWriter. If a job runs without errors there is no problem.
But I haven't found out how to handle state when a job fails after processing a number of records.
My scenario is really simple: read records from an XML file (ItemReader) and call an external system to store them (ItemWriter). So what happens if the external system is unavailable in the middle of the process and after a while the job status is set to FAILED? If I restart the job manually the next day, when the external system is up and running again, I will get duplicates for the previously loaded records.
In some way I must have information for skipping the already loaded records.
I have tried to store a cursor via the ExecutionContext, but when I restart the job I get a new JOB_EXECUTION_ID and the cursor data is lost, because I get a new row in BATCH_STEP_EXECUTION_CONTEXT.SHORT_CONTEXT. BATCH_STEP_EXECUTION.COMMIT_COUNT and BATCH_STEP_EXECUTION.READ_COUNT are also reset on restart.
I restart the job by using the JobOperator:
jobOperator.restart(jobExecutionId);
Is there a way to restart a job without getting a new jobExecutionId, or an alternative way to get the state of failing jobs? If someone has found (or can provide) an example covering state and error handling I would be happy.
One alternative solution is of course to create my own table that keeps track of processed records, but I really hope the framework has a mechanism for this. Otherwise I don't understand the point of Spring Batch.
Regards
Mats
One of the primary features Spring Batch provides is the persistence of the state of a job in the job repository. When a job fails, upon restart, the default behavior is for the job to restart at the step that failed (skipping the steps that have already been successfully completed). Within a chunk based step, most of our readers (the StaxEventItemReader included) store what records have been processed in the job repository (specifically within the ExecutionContext). By default, when a chunk based step fails, it's restarted at the chunk that failed last time, skipping the successfully processed chunks.
An example of all of this would be if you had a three step job:
<job id="job1">
<step id="step1" next="step2">
<tasklet>
<chunk reader="reader1" writer="writer1" commit-interval="10"/>
</tasklet>
</step>
<step id="step2" next="step3">
<tasklet>
<chunk reader="reader2" writer="writer2" commit-interval="10"/>
</tasklet>
</step>
<step id="step3">
<tasklet>
<chunk reader="reader3" writer="writer3" commit-interval="10"/>
</tasklet>
</step>
</job>
And let's say this job completes step1, then step2 has 1000 records to process but fails at record 507. The chunk consisting of records 501-510 would roll back and the job would be marked as failed. On restart, the job would skip step1, skip records 1-500 in step2, and start back at record 501 of step2 (assuming you're using stateful item readers).
With regards to the jobExecutionId on a restart, Spring Batch has the concept of a job instance (a logical run) and a job execution (a physical run). For a job that runs daily, the logical run would be the Monday run, the Tuesday run, etc. Each of these would consist of their own JobInstance. If the job is successful, the JobInstance would end up with only one JobExecution associated with it. If it failed and was re-run, a new JobExecution would be created for each of the times the job is restarted.
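To make a custom reader restartable in the same way, it can implement ItemStream and record its position in the ExecutionContext. A minimal sketch under those assumptions (the key name and the in-memory list are illustrative; built-in readers such as the StaxEventItemReader already do this for you):

import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemStreamReader;

public class RestartableListReader<T> implements ItemStreamReader<T> {

    private static final String INDEX_KEY = "restartable.list.reader.index";

    private final List<T> items;
    private int currentIndex = 0;

    public RestartableListReader(List<T> items) {
        this.items = items;
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        // On restart, resume from the last committed position
        if (executionContext.containsKey(INDEX_KEY)) {
            currentIndex = executionContext.getInt(INDEX_KEY);
        }
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        // Called at every commit: persist how far we have read
        executionContext.putInt(INDEX_KEY, currentIndex);
    }

    @Override
    public void close() throws ItemStreamException {
        // nothing to release for an in-memory list
    }

    @Override
    public T read() {
        return currentIndex < items.size() ? items.get(currentIndex++) : null;
    }
}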
You can read more about error handling in general and specific scenarios in the Spring Batch documentation found here: http://docs.spring.io/spring-batch/trunk/reference/html/index.html