There are two programs:
A COBOL CICS main program, which reads an MQ queue, triggers a transaction and sends the response back.
A COBOL CICS DB2 subprogram, which logs the MQ details built in the main program to a DB2 table.
The problem is that at the end of the unit of work, only the last row inserted into the table (from the subprogram) is committed, not the rows inserted by the previous calls.
I tested with an explicit commit as well, but the results are the same. A SYNCPOINT in program 2 does work, but it also breaks the updates made to VSAM files by other subprograms.
Main program (reads MQ, writes MQ) -> Subprogram (logs MQ details (Q/R) to the DB2 table, then returns control once the row is inserted).
Any help?
OK -- with EXEC CICS LINK and both programs executing in the same CICS region, the task automatically has transactionality across all resource managers, MQ as well as Db2. Your program can interfere with this by explicitly issuing the EXEC CICS SYNCPOINT command or the EXEC SQL COMMIT command.
I'm not sure I'm clear on what you mean when you say "triggers the transaction" as an action in your main program. By "transaction" do you mean a unit of work/recovery or do you mean a new CICS task? How is this triggering accomplished?
If your flow is simply:
MQ message arrives
Main program begins execution (triggered by MQ message)
Main program retrieves item from queue
Main program LINKs to subprogram
Subprogram issues INSERT to Db2 table
Subprogram returns to main program
Main program sends MQ reply
Main program retrieves next message from queue
Main program LINKs to subprogram
Subprogram issues INSERT to Db2 table
Subprogram returns to main program
Main program sends MQ reply
-- maybe repeat the GET, LINK, INSERT, RETURN, PUT sequence a couple of times
Main program returns/terminates normally
At this point the Db2 table should have multiple inserted rows.
You could optionally issue a SYNCPOINT/COMMIT after each MQ PUT command to cause the reply to flow immediately and commit the update to the Db2 table. (The original input MQ message is also permanently removed from the queue manager.)
If you are still having a problem with these programs, try asking a colleague to review them to see where you might have introduced an error.
If you think that CICS and/or Db2 are failing, you can open a case to IBM Support to get more assistance.
I have a multi-instance application, and each instance is multi-threaded.
To make each thread process only rows not already fetched by another thread, I'm thinking of using pessimistic locks combined with SKIP LOCKED.
My database is PostgreSQL 11 and I use Spring Batch.
For the Spring Batch part I use a classic chunk step (reader, processor, writer). The reader is a JdbcPagingItemReader.
However, I don't see how to use the pessimistic lock (SELECT ... FOR UPDATE) and SKIP LOCKED with the JdbcPagingItemReader, and I can't find a tutorial on the net explaining simply how this is done.
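To make the behaviour I'm after concrete: the SKIP LOCKED semantics can be mimicked outside the database with one lock per row, where a non-blocking acquire stands in for SELECT ... FOR UPDATE SKIP LOCKED. This is a plain Python sketch of the semantics only (no Spring or PostgreSQL involved; all names are illustrative):

```python
import threading

# Hypothetical work table: row id -> payload. In the real job this is a
# PostgreSQL table; each per-row Lock plays the role of the row lock taken
# by SELECT ... FOR UPDATE, and a no-wait acquire plays SKIP LOCKED.
rows = {i: f"payload-{i}" for i in range(10)}
row_locks = {i: threading.Lock() for i in rows}
processed = []
processed_guard = threading.Lock()

def worker():
    # Each worker scans the "table"; acquire(blocking=False) returns False
    # for a row another worker holds, so that row is skipped, not waited on.
    for row_id, lock in row_locks.items():
        if lock.acquire(blocking=False):
            with processed_guard:
                processed.append(row_id)
            # The lock is deliberately never released: the row stays
            # "claimed", like a row deleted/flagged inside the transaction.

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every row is processed exactly once, despite 4 concurrent workers.
print(sorted(processed))
```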
Any help would be welcome.
Thank you
I have approached a similar problem with a different pattern.
Please refer to
https://docs.spring.io/spring-batch/docs/current/reference/html/scalability.html#remoteChunking
Here you need to break the job into two parts:
Master
The master picks the records to be processed from the DB and sends a chunk as a message to the task-queue. It then waits for acknowledgements on a separate ack-queue; once it gets all the acknowledgements, it moves to the next step.
Slave
A slave receives a message, processes it, and sends an acknowledgement to the ack-queue.
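A minimal, middleware-free sketch of that master/slave flow, with plain Python queues standing in for the real task-queue and ack-queue (names and chunk contents are illustrative, not Spring Batch APIs):

```python
import queue
import threading

task_queue = queue.Queue()  # stands in for the task-queue middleware
ack_queue = queue.Queue()   # stands in for the ack-queue

CHUNKS = [[1, 2], [3, 4], [5, 6]]   # chunks the master read from the DB

def master():
    # The master sends each chunk, then waits for one ack per chunk
    # before moving on to the next step of the job.
    for chunk in CHUNKS:
        task_queue.put(chunk)
    for _ in CHUNKS:
        ack_queue.get()          # blocks until a slave acknowledges
    return "next-step"

results = []

def slave():
    while True:
        try:
            chunk = task_queue.get(timeout=0.5)
        except queue.Empty:
            return                             # no more work, stop
        results.extend(x * 10 for x in chunk)  # "process" the chunk
        ack_queue.put(("done", chunk))         # acknowledge on ack-queue

workers = [threading.Thread(target=slave) for _ in range(2)]
for w in workers:
    w.start()
state = master()
for w in workers:
    w.join()
print(state, sorted(results))
```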
I have a batch job and a CICS transaction that use the same db2 tables. Both run at regular intervals and the batch job abends once in a while due to contention with the shared DB2 tables.
Is there a way to schedule the job in CA7 (job scheduling tool) to prevent it from running when the transaction is active?
Disable the CICS transaction before starting the batch job, re-enable it when the batch job ends.
Modify the batch job to use commit intervals, similar to this answer.
Checking to see if the CICS transaction is active is unlikely to behave as you wish. It may be inactive when you check, then you start your batch job, then the CICS transaction becomes active.
Update #1
Though you don't specify, I'm getting the impression this is a long-running CICS transaction and not the normal OLTP-style transaction that finishes in less than 0.10 seconds of clock time.
If this is the case, then creating a batch program that uses the EXCI to execute a CICS program that uses the CICS SPI INQUIRE TASKLIST to locate your transaction may be the way to proceed. If you've got CA-DADs PLUS then you might be able to do this with that product instead of writing programs.
Please refer to the below thread to see whether it helps you in overcoming the issue.
https://ibmmainframes.com/about12949.html
Regards,
Anbu.
In z/OS, I can define a user EMP (Event Monitor Point) in the CICS MCT (Monitor Control Table). For example, one EMP can start a CPU clock/timer and another EMP can halt the CPU clock. I can then "execute" each EMP from my COBOL program during a TASK that runs the program. Execution of EMP "no.1" will start the clock and execution of EMP "no.2" will halt the clock.
I know that the eventual value of the CPU clock will be saved as part of the SMF 110 record that is written after completion of the TASK.
My question is, can the current value of the CPU clock be retrieved in the COBOL program while the TASK is still in execution?
If so, which CICS statement will do this? and into which structure/layout and field will the clock be retrieved?
The reason I wish to know is because I want to measure the CPU time that it takes for a certain process to be performed by the program. The same process may be performed a number of times in one TASK and I want to use the same CPU clock to measure each time that the process is performed.
Thanks very much,
Danny
EXEC CICS COLLECT STATISTICS MONITOR(EIBTASKN) SET (ADDRESS OF DFHMNTDS) can be used to retrieve a running task's monitoring fields, as Danny pointed out in a comment below.
The DFHMCT TYPE=EMP macro with PERFORM=DELIVER may fit your purpose. It causes the performance class data accumulated for the task up to that point to be delivered to the monitoring buffers. See the CICS documentation:
https://www.ibm.com/support/knowledgecenter/SSGMCP_5.5.0/reference/resources/macros/mct/emp.html
If you are on CICS TS V5.4 or later, you might consider separating out the process that runs repeatedly into its own transaction. Then use 'EXEC CICS RUN TRANSID CHILD' to start that transaction from the current COBOL program/task, which starts the process as a child task with its CPU time measured. You can get the response back from the child task using 'EXEC CICS FETCH CHILD'.
For details of using the two APIs please see articles in CICS Developer Center: https://developer.ibm.com/cics/category/asynchronous-api/
Thanks & kind regards,
Jenny (CICS development, IBM Hursley Lab)
Say I have a large set of calls to a procedure, with varying parameters, that are independent of each other, so I want to make the calls in parallel/asynchronously. I use Service Broker to fire these all off, but the problem is knowing, in a neat way, when they have all completed (or errored).
Is there a way to do this? I could just loop with waits, checking the result table for completion, but that isn't very "event-triggered". I'm hoping for a nicer way.
I have used the service broker with queue code and processing based off this other answer: Remus' service broker queuing example
Good day Shiv,
There are several ways (as always) that you can use to implement this requirement. One of them uses this logic:
(1) Create two queues: one is the trigger to execute the main SP that you want to run asynchronously, and the other is the trigger to execute whatever you want to run after all the executions have ended.
(2) When you create the message in the first queue, also create a message in the second queue; the second queue only tells us which executions have not ended yet (the first queue tells us which execution started, since once we START the execution we consume the message and remove it from the queue).
(3) Inside the SP that you execute via the first queue (this part runs synchronously):
(3.1) execute the queries you need
(3.2) clear the matching message from the second queue (so that message is removed only after the queries have ended)
(3.3) check whether there are messages left in the second queue. If there are none, then all the tasks have ended and you can execute your final step.
** Theoretically, instead of a second queue you could store the data in a table, but a second queue should give better performance than updating a table each time an execution ends. Anyhow, you can test the table option as well.
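The logic in steps (1)-(3) can be illustrated outside SQL Server. A minimal Python sketch, with queue.Queue standing in for the two Service Broker queues and the executions run serially for clarity (all names are illustrative):

```python
import queue

exec_queue = queue.Queue()     # queue 1: triggers the async executions
pending_queue = queue.Queue()  # queue 2: one marker per unfinished run

def enqueue_execution(params):
    # Step (2): create both messages together.
    exec_queue.put(params)
    pending_queue.put(params)

def run_one():
    params = exec_queue.get()   # message consumed when the run STARTS
    result = sum(params)        # (3.1) the real queries go here
    pending_queue.get()         # (3.2) clear a marker only after the work
    if pending_queue.empty():   # (3.3) no markers left: everything ended
        return result, "final-step"
    return result, None

for p in ([1, 2], [3, 4], [5, 6]):
    enqueue_execution(p)

outcomes = [run_one() for _ in range(3)]
print(outcomes)  # the final run is the one that triggers the final step
```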
Can we configure a trigger on an iSeries DB2 table that calls an AS400 program after the transaction commits?
No.
But you can have any changes made by the trigger program happen within the same commit cycle, so they are committed or rolled back at the same time as the triggering record.
If your trigger program is external RPGLE, make sure to use DFTACTGRP(*NO) ACTGRP(*CALLER) and to open the files it accesses under commitment control.
To detect a commit on the iSeries, monitor the journal with RCVJRNE for the commit journal entries.