I have a batch job and a CICS transaction that use the same DB2 tables. Both run at regular intervals, and the batch job abends once in a while due to contention on the shared DB2 tables.
Is there a way to schedule the job in CA7 (job scheduling tool) to prevent it from running when the transaction is active?
Disable the CICS transaction before starting the batch job, re-enable it when the batch job ends.
Modify the batch job to use commit intervals, similar to this answer.
Checking to see if the CICS transaction is active is unlikely to behave as you wish. It may be inactive when you check, then you start your batch job, then the CICS transaction becomes active.
Update #1
Though you don't specify, I'm getting the impression this is a long-running CICS transaction and not the normal OLTP-style transaction that finishes in less than 0.10 seconds of clock time.
If this is the case, then creating a batch program that uses EXCI (the External CICS Interface) to execute a CICS program that issues the CICS SPI INQUIRE TASKLIST command to locate your transaction may be the way to proceed. If you've got CA-DADs PLUS, you might be able to do this with that product instead of writing programs.
Please refer to the thread below to see whether it helps you overcome the issue.
https://ibmmainframes.com/about12949.html
Regards,
Anbu.
We have a requirement to process millions of records using Spring Batch. We plan to do this by reading the database using JdbcPagingItemReaderBuilder, processing in chunks, and writing to a Kafka queue. The active consumers of the queue will process the chunks of data and update the database.
The consumer's task is to iterate over every item in the chunk and invoke the external APIs.
In case the external system is down or does not respond with a success response, there should be at least 3 retries. Considering that each task in the chunk has to do this, what would be the ideal approach?
Another use case to consider: what happens when the system goes down mid-job, say after 10,000 records have already been processed and the rest are still pending? After the restart, how do we make sure execution doesn't restart the entire process from the beginning, but resumes from the point of failure?
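The bounded-retry requirement above can be sketched as a plain-Java helper (illustrative only: `callExternalApi` stands in for your real call, and in an actual Spring Batch job you would more likely configure retries on the fault-tolerant step or with Spring Retry rather than hand-roll a loop):

```java
import java.util.concurrent.Callable;

public class RetryDemo {

    // Invoke the given call up to maxAttempts times; rethrow the last failure
    // if every attempt fails.
    static <T> T withRetry(int maxAttempts, Callable<T> call) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;   // remember the failure and try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Hypothetical external API that fails twice, then succeeds.
        String result = withRetry(3, () -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("external system down");
            return "OK";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In practice you would add a backoff delay between attempts so a struggling external system isn't hammered three times in quick succession.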
Spring Batch creates the following tables. You can use them to check the status of your job and customize your scheduler to behave in a way you see fit.
I'd use the step execution ID in BATCH_STEP_EXECUTION to check the status that's set and then retry based on that status, or something along those lines.
BATCH_JOB_EXECUTION
BATCH_JOB_EXECUTION_CONTEXT
BATCH_JOB_EXECUTION_PARAMS
BATCH_JOB_INSTANCE
BATCH_STEP_EXECUTION
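For example (a sketch only; the column names below come from the standard Spring Batch metadata schema), the status of each step in the most recent job execution can be inspected with a query along these lines:

```sql
-- Status and progress counters of each step in the latest job execution,
-- e.g. to decide whether a restart should resume a failed step.
SELECT se.STEP_NAME, se.STATUS, se.COMMIT_COUNT, se.READ_COUNT, se.EXIT_CODE
FROM BATCH_STEP_EXECUTION se
JOIN BATCH_JOB_EXECUTION je ON se.JOB_EXECUTION_ID = je.JOB_EXECUTION_ID
WHERE je.JOB_EXECUTION_ID =
      (SELECT MAX(JOB_EXECUTION_ID) FROM BATCH_JOB_EXECUTION);
```

Note that when a failed job instance is relaunched with the same job parameters, Spring Batch itself consults these tables and resumes from the last committed chunk, so often no custom logic is needed beyond restarting the job.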
I have a multi instance application and each instance is multi threaded.
To make each thread process only rows not already fetched by another thread, I'm thinking of using pessimistic locks combined with SKIP LOCKED.
My database is PostgreSQL 11 and I use Spring Batch.
For the Spring Batch part I use a classic chunk step (reader, processor, writer). The reader is a JdbcPagingItemReader.
However, I don't see how to use a pessimistic lock (SELECT ... FOR UPDATE) and SKIP LOCKED with the JdbcPagingItemReader, and I can't find a tutorial explaining simply how this is done.
Any help would be welcome.
Thank you
I have approached a similar problem with a different pattern.
Please refer to
https://docs.spring.io/spring-batch/docs/current/reference/html/scalability.html#remoteChunking
Here you need to break job in two parts:
Master
The master picks the records to be processed from the DB and sends each chunk as a message to a task-queue. It then waits for acknowledgements on a separate ack-queue; once it has received all acknowledgements, it moves to the next step.
Slave
The slave receives a message, processes it, and sends an acknowledgement to the ack-queue.
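The master/slave flow above can be sketched, purely for illustration, with in-process queues standing in for the message broker (in a real remote-chunking deployment, Spring Batch's remote-chunking support and a broker such as RabbitMQ or Kafka would replace these `BlockingQueue`s, and the worker would run in a separate JVM):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RemoteChunkingSketch {

    // Master side: split the records into chunks, send them to the task queue,
    // then block until one acknowledgement per chunk arrives on the ack queue.
    static List<String> runMaster(List<Integer> records, int chunkSize)
            throws InterruptedException {
        BlockingQueue<List<Integer>> taskQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> ackQueue = new LinkedBlockingQueue<>();

        int chunks = (records.size() + chunkSize - 1) / chunkSize;

        // Worker ("slave") side: receive a chunk, process each item, acknowledge.
        Thread worker = new Thread(() -> {
            try {
                for (int c = 0; c < chunks; c++) {
                    List<Integer> chunk = taskQueue.take();
                    chunk.forEach(item -> { /* invoke the external API per item here */ });
                    ackQueue.put("ack:" + chunk.size());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        for (int i = 0; i < records.size(); i += chunkSize) {
            taskQueue.put(new ArrayList<>(
                    records.subList(i, Math.min(i + chunkSize, records.size()))));
        }

        List<String> acks = new ArrayList<>();
        for (int c = 0; c < chunks; c++) {
            acks.add(ackQueue.take());   // blocks until the worker acknowledges
        }
        worker.join();
        return acks;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runMaster(List.of(1, 2, 3, 4, 5, 6, 7), 3));
    }
}
```

The key property this illustrates: the master never advances past a step until every outstanding chunk has been acknowledged, which is what makes the pattern restart-safe.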
In z/OS, I can define a user EMP (Event Monitor Point) in the CICS MCT (Monitor Control Table). For example, one EMP can start a CPU clock/timer and another EMP can halt the CPU clock. I can then "execute" each EMP from my COBOL program during a TASK that runs the program. Execution of EMP "no.1" will start the clock and execution of EMP "no.2" will halt the clock.
I know that the eventual value of the CPU clock will be saved as part of the SMF 110 record that is written after completion of the TASK.
My question is, can the current value of the CPU clock be retrieved in the COBOL program while the TASK is still in execution?
If so, which CICS statement will do this? And into which structure/layout and field will the clock be retrieved?
The reason I wish to know is because I want to measure the CPU time that it takes for a certain process to be performed by the program. The same process may be performed a number of times in one TASK and I want to use the same CPU clock to measure each time that the process is performed.
Thanks very much,
Danny
EXEC CICS COLLECT STATISTICS MONITOR(EIBTASKN) SET(ADDRESS OF DFHMNTDS) can be used to retrieve a running task's monitoring fields, as Danny pointed out in the comments below.
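A sketch of how that command might look in the COBOL program (illustrative only; DFHMNTDS is the monitoring data layout, and which fields it contains for your user EMP clocks depends on your MCT definitions):

```cobol
       LINKAGE SECTION.
       COPY DFHMNTDS.

       PROCEDURE DIVISION.
      * Map the monitoring data for the current task into DFHMNTDS.
           EXEC CICS COLLECT STATISTICS
                MONITOR (EIBTASKN)
                SET     (ADDRESS OF DFHMNTDS)
           END-EXEC.
      * Fields in DFHMNTDS now reflect the performance class data,
      * including user EMP clocks, accumulated for the task so far.
```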
The DFHMCT TYPE=EMP macro with PERFORM=DELIVER may fit your purpose. It causes the performance class data accumulated for the task up to that point to be delivered to the monitoring buffers. See the CICS documentation:
https://www.ibm.com/support/knowledgecenter/SSGMCP_5.5.0/reference/resources/macros/mct/emp.html
If you are on CICS TS V5.4 or later, you might consider separating the process that runs repeatedly out into its own transaction. Then use 'EXEC CICS RUN TRANSID CHILD' to start that transaction from the current COBOL program/task, which starts the process as a child task with its CPU time measured separately. You can get the response back from the child task using 'EXEC CICS FETCH CHILD'.
For details of using the two APIs please see articles in CICS Developer Center: https://developer.ibm.com/cics/category/asynchronous-api/
Thanks & kind regards,
Jenny (CICS development, IBM Hursley Lab)
We have a job on our SQL database that runs periodically forever.
During predefined maintenance periods, we would like to have this job stop for a set time (say 12 hours) and then restart the regular periodic schedule.
We've tried using a separate job that disables it at the predefined time and a second one that re-enables it. This works but is not very neat.
Is there a better way to do this that only involves the job itself?
Make a "maintenance schedule" table (StartDate, EndDate, Description, etc.) in a service database or in msdb. Let the first step of your job check whether the current datetime falls within a maintenance period. If so, just do nothing.
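A minimal sketch of that first step (the table and column names here are assumptions, not an existing object):

```sql
-- First job step: exit quietly if we are inside a maintenance window.
IF EXISTS (
    SELECT 1
    FROM msdb.dbo.MaintenanceSchedule
    WHERE GETDATE() BETWEEN StartDate AND EndDate
)
BEGIN
    RETURN;  -- do nothing; the job runs again at its next scheduled interval
END;
-- ... regular job logic continues here ...
```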
If a session or transaction is associated with the maintenance process then you could use an application lock to have the regular job wait, or terminate, if it attempts to run while the maintenance is in process.
Using a locking mechanism allows finer control over the processes, e.g. the regular job can release and reacquire the lock between steps and wait (or terminate) if the maintenance process has started. Alternatively, the maintenance process could wait for the regular job to terminate (or reach a suitable checkpoint) before proceeding.
See sp_getapplock for additional information.
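For example (a sketch; the resource name is arbitrary), the maintenance session could hold an exclusive application lock for the duration of the window, and the regular job could test it before doing any work:

```sql
-- Maintenance session: acquire and hold the lock for the whole window.
EXEC sp_getapplock @Resource = 'NightlyMaintenance',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Session';

-- Regular job, first step: fail fast if maintenance holds the lock.
DECLARE @rc int;
EXEC @rc = sp_getapplock @Resource = 'NightlyMaintenance',
                         @LockMode = 'Shared',
                         @LockOwner = 'Session',
                         @LockTimeout = 0;      -- 0 = do not wait at all
IF @rc < 0
    RETURN;  -- maintenance in progress; skip this run
-- ... regular job work ...
EXEC sp_releaseapplock @Resource = 'NightlyMaintenance', @LockOwner = 'Session';
```

A negative return code from sp_getapplock means the lock was not granted (timeout, deadlock, or cancellation), which is exactly the "maintenance is running" signal the job needs.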
I have a Perl script that I'm attempting to set up using Perl threads (use threads). When I run simple tests everything works, but in my actual script (which has the threads running multiple SQL*Plus sessions), each SQL*Plus session runs in order (i.e., thread 1's SQL*Plus runs steps 1-5, then thread 2's SQL*Plus runs steps 6-11, etc.).
I thought I understood that threads would do concurrent processing, but something's amiss. Any ideas, or should I be doing some other Perl magic?
A few possible explanations:
Are you running this script on a multi-core or multi-processor machine? If you only have one CPU, only one thread can use it at any given time.
Are there transactions or locks involved in steps 1-6 that would prevent them from being run concurrently?
Are you certain you are using multiple connections to the database and not sharing a single one between threads?
Actually, you have no way of guaranteeing in which order threads will execute. So the behavior (if not what you expect) is not really wrong.
I suspect you have some kind of synchronization going on here. Possibly SQL*Plus only lets itself be called once? Some programs do that...
Other possibilities:
Thread creation and process creation (you are creating subprocesses for SQL*Plus, aren't you?) take longer than running the thread, so thread 1 is finished before thread 2 even starts.
You are using transactions in your SQL scripts that force synchronization of database updates.
Check your database settings. You may find that it is set up in a conservative manner. That would cause even minor reads to block all access to that information.
You may also need to call threads::yield.