What are pre-online and post-online triggers?

What are pre-online and post-online triggers? I am not really sure.
I am using failover management for DB2 9.7 and I need to use pre-online and post-online scripts. Any help would be appreciated.

It appears that those terms refer to part of DB2's support for Microsoft Failover Clustering. From the Publib documentation (there is more information on that page):
Pre-online and post-online scripts

You can run scripts both before and after a DB2 resource is brought online. These scripts are referred to as pre-online and post-online scripts respectively. Pre-online and post-online scripts are .BAT files that can run DB2 and system commands.

In a situation when multiple instances of DB2 might be running on the same machine, you can use the pre-online and post-online scripts to adjust the configuration so that both instances can be started successfully. In the event of a failover, you can use the post-online script to perform manual database recovery. The post-online script can also be used to start any applications or services that depend on DB2.
Here is a whitepaper on implementing such scripts. It is a little older (created for UDB v8.1, which is well out of support), but it may give you something to start from. The scripts are described starting on page 34.
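To give a feel for what a post-online script typically does, here is a minimal sketch. Note that the actual hooks must be .BAT files per the documentation above; this is written in Python purely for illustration, and the database and service names are made up.

```python
# Illustrative sketch only: DB2's pre-/post-online hooks are .BAT files, so this
# Python version just shows the kind of work a post-online script typically does.
# The database and service names below are hypothetical.
import subprocess

def run(cmd: str) -> int:
    """Run a shell command and return its exit code, echoing it for the cluster log."""
    print(f"post-online: {cmd}")
    return subprocess.call(cmd, shell=True)

def main() -> None:
    # Perform manual crash recovery on the failed-over database (hypothetical DB name).
    run("db2 restart database SAMPLE")
    # Start an application service that depends on DB2 (hypothetical service name).
    run("net start MyAppService")

if __name__ == "__main__":
    main()
```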

Related

Automate CosmosDB emulator setup

I would like to create scripts to prepare a dev CosmosDB emulator with all the databases, containers and index policies. Is there a way to do this?
I saw there are some PowerShell cmdlets, but those are for administrative tasks only. The Cosmos DB CLI doesn't seem to have any of the needed capabilities either.
There is a great PowerShell module, CosmosDB, which can help in many ways with automating the emulator. The only struggle and challenge for me would be to have some kind of automatic translation from the Terraform scripts (container names, db setup, indexes, etc.) to PowerShell.
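If PowerShell turns out to be limiting, another option is to script the emulator setup with the azure-cosmos Python SDK against the emulator's local endpoint. A minimal sketch, assuming the emulator is running locally; the database name, container name, partition key and indexing policy below are placeholders standing in for whatever the Terraform scripts define:

```python
# Sketch: create a database, a container and an indexing policy in the local
# Cosmos DB emulator using the azure-cosmos Python SDK. All names and the
# policy are placeholders. You may need to trust the emulator's self-signed
# certificate before HTTPS calls to localhost succeed.
from azure.cosmos import CosmosClient, PartitionKey

EMULATOR_URL = "https://localhost:8081"
EMULATOR_KEY = "<emulator master key>"  # the emulator ships with a fixed, documented key

client = CosmosClient(EMULATOR_URL, credential=EMULATOR_KEY)

# Hypothetical database and container, mirroring the Terraform definitions.
database = client.create_database_if_not_exists(id="dev-db")
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    indexing_policy={
        "indexingMode": "consistent",
        "includedPaths": [{"path": "/*"}],
        "excludedPaths": [{"path": "/largeBlob/?"}],
    },
)
print("Emulator database and container are ready.")
```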

Trigger an Airflow DAG asynchronously from a database trigger

I want to consolidate a couple of historically grown scripts (Python, Bash and PowerShell) whose purpose is to sync data between a lot of different database backends (mostly Postgres, but also Oracle and SQL Server) across different sites. There isn't really a master; it's more a loose group of partner companies working on the same domain-specific use cases, each with its own data silo, and it's my job to hold all of this together as well as I can.
Currently those scripts are cron-scheduled and need to run on the origin server where a dataset gets initially written, in order to sync it to every partner overnight.
I am also familiar with Apache Airflow and use it in another project, so my idea was to use a workflow management tool like Airflow to streamline the sync process and make it more centralized. But Airflow, too, only offers a time-interval scheduler to trigger a DAG.
As most writes come in via Postgres databases, I'd like to make use of the NOTIFY/LISTEN feature; I already have a Python daemon based on this that listens for database changes (via triggers) and then calls an event handler.
The last missing piece is how best to trigger an Airflow DAG from this handler, and how to keep all of this running reliably.
Perhaps there is a better solution?
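One possible shape for that handler is sketched below: a psycopg2-based listener that waits for NOTIFY events and creates a DAG run through Airflow's REST API. The channel name, DAG id, Airflow URL and credentials are placeholders, and the /api/v1 endpoint assumes Airflow 2.x with the stable REST API and basic auth enabled.

```python
# Sketch: bridge Postgres NOTIFY events to Airflow DAG runs.
# Channel, DAG id, URLs and credentials are placeholders; assumes Airflow 2.x
# with the stable REST API and a basic-auth backend configured.
import select

import psycopg2
import requests

PG_DSN = "dbname=app user=sync_listener host=db.example.internal"
CHANNEL = "dataset_written"          # matches the channel used by the table triggers
AIRFLOW_API = "http://airflow.example.internal:8080/api/v1"
DAG_ID = "partner_sync"
AUTH = ("airflow_user", "airflow_password")

def trigger_dag(payload: dict) -> None:
    """Create a DAG run, passing the NOTIFY payload through as the run conf."""
    resp = requests.post(
        f"{AIRFLOW_API}/dags/{DAG_ID}/dagRuns",
        json={"conf": payload},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()

def main() -> None:
    conn = psycopg2.connect(PG_DSN)
    conn.autocommit = True           # notifications are delivered outside transactions
    with conn.cursor() as cur:
        cur.execute(f"LISTEN {CHANNEL};")
    while True:
        # Block until the connection's socket is readable, then drain notifications.
        if select.select([conn], [], [], 60) == ([], [], []):
            continue                 # timeout: loop again (a good place for a heartbeat)
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            trigger_dag({"channel": note.channel, "payload": note.payload})

if __name__ == "__main__":
    main()
```

To keep it running reliably, the daemon itself would be supervised (systemd, a container with restart policy, etc.) and should reconnect on database errors.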

Running automated jobs in PostgreSQL

I have set up a PostgreSQL server and am using pgAdmin 4 to manage the databases/clusters. I have a bunch of SQL validation scripts (.sql) which I run against the databases every time some data is added.
My current requirement is to automatically run these .sql scripts and generate some results/statistics every time new data is added to any of the tables in the database.
I have explored the use of pg_cron (https://www.citusdata.com/blog/2016/09/09/pgcron-run-periodic-jobs-in-postgres/) and pgAgent (https://www.pgadmin.org/docs/pgadmin4/dev/pgagent_jobs.html).
Before I integrate either of these tools into my application, I wanted to know whether it is advisable to proceed with these utilities or whether I should employ a full-fledged CI framework like Jenkins.
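For reference, registering a job with pg_cron is a single SQL call. A minimal sketch from Python via psycopg2, assuming the pg_cron extension is installed in this database and that a hypothetical validate_new_data() function wraps the logic of the .sql scripts:

```python
# Sketch: schedule a periodic validation job with pg_cron from Python.
# Assumes pg_cron is installed and configured for this database, and that a
# hypothetical validate_new_data() function wraps the .sql validation logic.
import psycopg2

with psycopg2.connect("dbname=mydb user=admin host=localhost") as conn:
    with conn.cursor() as cur:
        # Run the validation every 15 minutes; cron.schedule returns the new job id.
        cur.execute(
            "SELECT cron.schedule(%s, %s);",
            ("*/15 * * * *", "SELECT validate_new_data();"),
        )
        job_id = cur.fetchone()[0]
        print(f"pg_cron job registered with id {job_id}")
```

Note that pg_cron is interval-based; if the validation really must fire on every insert rather than on a schedule, a trigger-based approach (e.g. NOTIFY/LISTEN, as in the Airflow question above) may be a better fit.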

How to run a group of PowerShell scripts in Azure

I have a group of interdependent .ps1 scripts I want to run in Azure (trying to set up continuous deployment with git, Pester unit tests, etc., as outlined in this blog). How can I run these scripts in Azure without needing to manage a server on which those scripts can run? E.g., can I put them in a storage account and execute them there, or do something similar?
Using an Azure Automation account/runbook seems to be limited to a single script per runbook (granted, you can use modules, but that is insufficient in my case).
Note that I need to use PowerShell version 5+ (I noticed Azure web apps and functions only have 4.x.)
Thanks in advance!
You were on the right track with Azure Functions. However, given that you need v5+ of PowerShell, you may want to look at Azure Container Instances (ACI) instead. It's a slightly different approach (via containers), but it should not impose any of those limitations and will free you from having to manage a virtual machine.
Note: At this time ACI is in preview. Documentation is available here.
There is a PowerShell container image available on Docker Hub that you could start with. To execute multiple scripts in the container, you can override CMD in the Dockerfile.
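One way to run several interdependent scripts inside such a container is a small entrypoint that invokes them in order and fails fast. A sketch, assuming the .ps1 files are copied into /scripts, that the file names are placeholders, and that Python is available in the image (an equivalent wrapper could be written in any shell):

```python
# Sketch: an entrypoint that runs a set of interdependent .ps1 scripts in a
# fixed order inside the PowerShell container, stopping on the first failure.
# The directory and script names are placeholders.
import subprocess
import sys

SCRIPTS = [
    "/scripts/01-setup.ps1",
    "/scripts/02-run-pester-tests.ps1",
    "/scripts/03-deploy.ps1",
]

for script in SCRIPTS:
    print(f"running {script}")
    result = subprocess.run(["pwsh", "-File", script])
    if result.returncode != 0:
        sys.exit(result.returncode)  # fail the container so ACI reports the error
```

You would then point CMD in the Dockerfile at this entrypoint.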

SAS - DB2 - connection - coding

Can anyone let me know how to pull data from DB2 using a SAS program? I have a DB2 query and want to write SAS code that pulls the data from DB2 using that query. Please share your knowledge of achieving this task [SAS-Mainframe]. Also: any pointers on connecting to DB2 (mainframe) using SAS.
Most likely the issue is with your JCL, not SAS. On the mainframe, jobs are run in LPARs (logical partitions). An analogy would be several computers networked together. Each LPAR (or computer) would be set up with software and networked to hard drives and DB2 servers. Usually one LPAR is set aside to run only production jobs, another for development, etc. This is a way to make sure production jobs get the resources they need without development jobs interfering.
In this scenario, each LPAR would have SAS installed, but only one partition would be networked to the DB2 server you are trying to get your data from. Your JCL tells the system which LPAR to run your job on. Either the wrong LPAR is coded in your JCL, or your job is running in a default LPAR which is not the one it needs.
The JCL code to run in the correct LPAR is customized for each system, so only someone who runs jobs on your system will know what it is. I suggest going to someone else who runs jobs on your system and telling them, as you said, that the 'SAS program without DB2 connectivity is working fine, but otherwise it is not.' They should be able to point you to the JCL code you need.
Good luck.