How to identify if a syncpoint was issued - Db2

I'm working on a COBOL CICS program. The program has multiple syncpoints that are executed under different scenarios.
I want to perform some logic depending on whether a syncpoint has been issued at any point during the CICS task.
Please advise whether there is any way, or any keyword, to check this.

According to the documentation an implicit syncpoint is issued for...
EXEC CICS CREATE TERMINAL
EXEC CICS CREATE CONNECTION COMPLETE
EXEC CICS DISCARD CONNECTION
EXEC CICS DISCARD TERMINAL
EXEC CICS RETURN at the highest level
a DL/I program specification block (PSB) termination (TERM) call or command
EXEC CICS DISCARD TDQUEUE
EXEC CICS CSD DISCONNECT
EXEC CICS CSD UNLOCK
EXEC CICS CSD APPEND
EXEC CICS CSD REMOVE
EXEC CICS CSD RENAME
EXEC CICS CSD LOCK
EXEC CICS CSD DELETE
EXEC CICS CSD ADD
EXEC CICS CSD INSTALL
EXEC CICS CSD DEFINE
EXEC CICS CSD ALTER
EXEC CICS CSD COPY
EXEC CICS CSD USERDEFINE
EXEC CICS CREATE PROCESSTYPE
EXEC CICS CREATE DUMPCODE
EXEC CICS CREATE PIPELINE
EXEC CICS CREATE DUMPTEMPLATE
EXEC CICS CREATE PARTNER
EXEC CICS CREATE TRANCLASS
EXEC CICS CREATE JOURNALMODEL
EXEC CICS CREATE MAPSET
EXEC CICS CREATE ENQMODEL
EXEC CICS CREATE PARTITIONSET
EXEC CICS CREATE TSMODEL
EXEC CICS CREATE PROFILE
EXEC CICS CREATE DB2TRAN
EXEC CICS CREATE DB2ENTRY
EXEC CICS CREATE TRANSACTION
EXEC CICS CREATE TDQUEUE
EXEC CICS CREATE TYPETERM
Distributed Program Link calls specifying SYNCONRETURN
...and explicitly through an EXEC CICS SYNCPOINT.

So the problem here is that a SYNCPOINT *should* be logically null and void. If your program depends on knowing when a SYNCPOINT happens, for any reason, it is likely to fail frequently in production under any sort of load.
Rather than trying to identify when a SYNCPOINT happens, you should be looking for ways to remove that (perceived) dependency from your code. You will be better off in the end not trying to micromanage all the transaction partners and just letting the platform do its job.

Related

dbt restart with defer state picks only last run and ignores previous aborts

Currently we need to run two independent sets of models at the same time. I am looking into restartability for the two sets independently of each other, since they will be monitored by separate teams. If both model sets abort, the restart picks up only the last one from run_results.json.
Here is my current approach to running both in parallel:
dbt run --select +modelset2 --target-path .\target\modelset2
dbt run --select +modelset1 --target-path .\target\modelset1
and for restartability:
dbt run --select result:error --defer --state .\target\modelset2
dbt run --select result:error --defer --state .\target\modelset1
Is this a correct approach?
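For reference, here is the whole flow as one script (a sketch of the setup above; the background & / wait pattern is my assumption about how the two runs are launched in parallel, with POSIX paths in place of the Windows ones):
#!/bin/sh
# Run both independent model sets in parallel, each writing artifacts
# (including run_results.json) to its own target path.
dbt run --select +modelset2 --target-path ./target/modelset2 &
dbt run --select +modelset1 --target-path ./target/modelset1 &
wait  # let both runs finish before deciding what to restart

# Restart only the failed models of each set, reading each set's own
# run_results.json so the restarts cannot pick up each other's state.
dbt run --select result:error --defer --state ./target/modelset2
dbt run --select result:error --defer --state ./target/modelset1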

Kubectl exec stops/closes after 2-3 minutes

I'm trying to backup a MySQL database running inside a k8s cluster (k3s). This cluster is running locally on a Raspberry Pi. I've built a custom image based on Alpine Linux that contains a script to create the backup using mysqldump:
kubectl exec <pod_name> -n <namespace_name> -- /usr/bin/mysqldump -u <db_user> --password=<db_password> --verbose <db_name> > <file_name>
When I run the mysqldump command from inside the database pod, it completes successfully after 10-15 seconds. But when the same command is executed from inside the Alpine pod, it somehow takes a lot longer (2m40s). At that point the kubectl exec command stops/aborts (due to a timeout?) and the script uploads a corrupt SQL file, because mysqldump hadn't finished the backup.
Expected verbose output:
-- Connecting to localhost...
-- Retrieving table structure for table _prisma_migrations...
-- Sending SELECT query...
-- Retrieving rows...
-- Retrieving table structure for table database...
-- Sending SELECT query...
-- Retrieving rows...
-- Retrieving table structure for table recycleBin...
-- Sending SELECT query...
-- Retrieving rows...
-- Retrieving table structure for table user...
-- Sending SELECT query...
-- Retrieving rows...
-- Disconnecting from localhost...
Received verbose output:
-- Connecting to localhost...
-- Retrieving table structure for table _prisma_migrations...
-- Sending SELECT query...
-- Retrieving rows...
-- Retrieving table structure for table database...
-- Sending SELECT query...
-- Retrieving rows...
-- Retrieving table structure for table recycleBin...
-- Sending SELECT query...
-- Retrieving rows...
I have two questions:
1. Why does the mysqldump command take so much longer in the Alpine pod compared to the database pod?
2. Why isn't the kubectl exec command waiting until mysqldump has finished taking the backup? Why does it suddenly decide it's time to disconnect and move on?
It is possible that you're getting disconnected because kubectl thinks the connection is dead due to no data from the other side.
Instead of:
kubectl exec <pod_name> -n <namespace_name> -- /usr/bin/mysqldump -u <db_user> --password=<db_password> --verbose <db_name> > <file_name>
Try:
kubectl exec <pod_name> -n <namespace_name> -- /usr/bin/mysqldump -u <db_user> --password=<db_password> --verbose <db_name> | tee <file_name>
This outputs to stdout as well as writing to the file, which guarantees that kubectl sees data coming back and will not disconnect you if this is a no-data problem. The downside is that the entire SQL dump is also pumped back to you on stdout.
The other alternative (and the one I would personally recommend) is installing something like screen or tmux or another terminal multiplexer and doing the dump in that, so you can disconnect from the pod and not worry about kubectl disconnecting you.
EDIT: Just to clarify, I meant installing the multiplexer inside the pod (e.g. inside the Docker image you use to create the pod).
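For example (a sketch; pod, namespace, and credentials are placeholders as above, and apk assumes an Alpine-based image):
# One-off: install tmux in the pod (better: bake it into the image).
kubectl exec <pod_name> -n <namespace_name> -- apk add --no-cache tmux
# Run the dump in a detached tmux session, writing to a file inside the pod,
# so a dropped kubectl connection cannot truncate the output.
kubectl exec <pod_name> -n <namespace_name> -- tmux new-session -d -s backup "/usr/bin/mysqldump -u <db_user> --password=<db_password> <db_name> > /tmp/backup.sql"
# Once the session has finished, copy the file out.
kubectl cp <namespace_name>/<pod_name>:/tmp/backup.sql <file_name>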

Run postgres index creation in background

I want to run index creation without any need for the client to remain connected to the server.
I found this thread, which mentions that the only way to create an index in the background is:
psql -c "CREATE INDEX index_name ON table(column)" &
But this still requires the client to stay connected to the server. Since that is an old thread, I would like to know if any new feature has been introduced in Postgres that allows such a scenario.
No, this is still not possible.
I would run the command on the server itself, so there can be no network problem:
nohup psql -c "CREATE INDEX CONCURRENTLY ..." </dev/zero >/dev/null 2>&1 &
Then you can disconnect the session, and the command will keep running.
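If you also want to check on the index build from a new session, one option (a sketch; this assumes PostgreSQL 12 or later, which added the progress view) is:
# Watch the build's progress from any other session.
psql -c "SELECT phase, blocks_done, blocks_total FROM pg_stat_progress_create_index"
# Or just confirm the backend is still at work.
psql -c "SELECT pid, state, query FROM pg_stat_activity WHERE query LIKE 'CREATE INDEX%'"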

What does **$$** mean when configuring PowerShell Command Line for DB2?

I found this article that shows how you can set up PowerShell to act as your command line for processing DB2 commands.
In the article, it says that you can use the following command to configure PowerShell to run DB2 commands:
Set-Item -Path env:DB2CLP -Value "**$$**"
In the above command, what does the "**$$**" mean?
Thanks!
It has a function, as distinct from a meaning, and the **$$** is meant for the Db2 CLP (db2.exe). Even if you are not using PowerShell (i.e. you are using db2cmd.exe or cmd.exe), this environment variable can be useful.
It tells the Db2 CLP to configure the current PowerShell session so that it can communicate with the background process db2bp.exe (the communication is IPC based). Such communication is necessary because it is that background process, db2bp.exe, which maintains your connection to the database when you run db2 connect to $your_database or an equivalent cmdlet. db2.exe manages db2bp.exe, so you don't have to worry about it.
The Db2 CLP knows which db2bp.exe it starts for your PowerShell session and uses the DB2CLP environment variable as part of that.
Each individual db2 ... command (or cmdlet) may complete quickly, and will act on the currently connected database; you can run many db2 commands one after the other, or run scripts, but all the while it is the background task db2bp.exe that keeps your Db2 connection alive without needing to reconnect (as long as the Db2 server does not itself end or kill the connection).
The db2bp.exe process will disappear when you run db2 terminate or end the process. You need to run db2 terminate when reconfiguring the node directory or database directory, when switching between different Db2 instances running on the same hostname, or optionally after db2 connect reset.
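Putting this together, a typical session might look like this (a sketch; SAMPLE is a placeholder database name):
# Tag this PowerShell session so the Db2 CLP can pair it with a db2bp.exe.
Set-Item -Path env:DB2CLP -Value "**$$**"
# The first db2 command starts a db2bp.exe bound to this session...
db2 connect to SAMPLE
# ...and that background process keeps the connection alive between commands.
db2 "select count(*) from syscat.tables"
db2 list tables
# Drop the connection and end the background db2bp.exe.
db2 terminate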

How to cancel or terminate long-running query in DB2 CLP without killing CLP?

Assume I started executing an inefficient, long-running query in an IBM DB2 database using the CLP. Maybe there are some terrible joins in there that take a lot of processing time in the database. Maybe I don't have the much-needed index yet.
# db2 +c -m
db2 => connect to mydb
db2 => select * from view_with_inefficient_long_running_query
//// CLP waiting for database response
How do I cancel the processing of this statement/query without killing the DB2 CLP? I can cancel it by killing the CLP, that is, by pressing Ctrl-C:
db2 => select * from view_with_inefficient_long_running_query
^C
SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
# db2 +c -m
db2 =>
But is there another more elegant way? Perhaps another Ctrl- shortcut? I already saw this question, but it doesn't talk about what I want.
AFAIK, there is no Ctrl- shortcut to terminate the query from within the CLP. Your options are to open another terminal session and use LIST/FORCE APPLICATIONS; to suspend the CLP with Ctrl-Z and use another CLP to LIST/FORCE; or to use a GUI tool like Data Server Manager to find the application and force it to terminate.
db2 list applications for database <database>
Get the application handle for the session(s) you want to terminate, then run:
db2 force application ( application-handle )
See LIST APPLICATIONS and FORCE APPLICATION for details.
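For example, from a second terminal (a sketch; the handle 1234 is made up, yours comes from the LIST output):
# Find the application handle of the session running the bad query.
db2 list applications for database mydb show detail
# Force just that session off; its in-flight statement is rolled back.
db2 "force application ( 1234 )"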
Alternatively, you could use a QueryTimeout configuration, as in https://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0008809.html
That way your command would stop on its own and report SQLSTATE 57013:
-913 UNSUCCESSFUL EXECUTION CAUSED BY DEADLOCK OR TIMEOUT.
Rather than using the CLP interactively, why not run the query directly from the UNIX prompt?
db2 "select * from view_with_inefficient_long_running_query"
You can hit Ctrl-C to cancel your query. The connection to the database is maintained by the DB2 back-end process (db2bp), and you get all the benefits of working in a UNIX shell: superior history, command pipelines, etc.