Backup Oracle Physical Standby through DBMS_SCHEDULER job - oracle12c

Friends!
We have a lot of primary databases, each with a physical standby database in a Data Guard configuration. Each primary database is on its own server, and each physical standby is on its own server.
In EM12c we've configured scheduler jobs to back up our primary databases. Unfortunately, when a server is really busy, the Agent suspends the backup execution and we don't get a backup according to our schedule.
So we disabled our backup jobs in EM12c and want to perform the backups on the physical standby using the DBMS_SCHEDULER.CREATE_JOB procedure.
As the physical standby is a read-only, block-for-block copy of the primary database, I would have to create the scheduler job on the primary, and it would then be applied to the standby.
So the question is: is it possible? And if yes, how can I implement this in a script?
Something like this:
check database_role
if role='PHYSICAL STANDBY'
then execute backup script
else nothing to do..
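In shell terms, I imagine something like this (just a sketch, not a tested script: OS authentication is assumed, and backup_standby.sh is a placeholder for the actual RMAN backup call):
#!/usr/bin/bash
# Sketch only: run the backup only when this database is a physical standby.
ROLE=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pagesize 0
select database_role from v$database;
exit
EOF
)
if [ "$ROLE" = "PHYSICAL STANDBY" ]
then
    /home/oracle/scripts/backup_standby.sh
else
    echo "Database role is $ROLE - nothing to do"
fi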
If it's not possible, which solution is best for this task?
Is there a way to solve this problem without creating a cron task with its own script on each server? Is it possible to use one global script from the recovery catalog database?
Kris said that I can't run scheduled jobs from a physical standby database.
So I'll schedule my Linux script with crontab.
My Linux script is:
#!/usr/bin/bash
LOG_PATH=/home/oracle/scripts/logs; export LOG_PATH
TASK_NAME=backup_database_inc0; export TASK_NAME
CUR_DATE=`date +%Y.%m.%d-%H:%M`; export CUR_DATE
LOGFILE=$LOG_PATH/$TASK_NAME.$CUR_DATE.log; export LOGFILE
# Run the global stored script 'backup_database' from the recovery catalog
rman target / catalog rmancat/<pswd>@rmancat script 'backup_database' log $LOGFILE
if [ $? -eq 0 ]
then
    mail -s "$ORACLE_UNQNAME Backup Status: SUCCESS" dba@server.mail < $LOGFILE
    exit 0
else
    mail -s "$ORACLE_UNQNAME Backup Status: FAILED" dba@server.mail < $LOGFILE
    exit 1
fi
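A crontab entry along these lines would drive it (the time and the script path are just examples):
# Run the level 0 backup every night at 02:00
0 2 * * * /home/oracle/scripts/backup_database_inc0.sh >> /home/oracle/scripts/logs/cron.log 2>&1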
I don't want to create a Linux script file on each host just to call the global backup script from my recovery catalog. Is it possible to configure a centralized backup execution schedule for all hosts? Can I configure ssh from one host to all database hosts and execute my backup script over it?
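What I have in mind is something like this, run from one central host (a sketch only: passwordless ssh, the host list and the script path are assumptions, and the remote script has to set its own Oracle environment because a non-interactive ssh session won't source the profile):
#!/usr/bin/bash
# Sketch: run the backup script on every standby host from one central machine.
HOSTS="stbyhost1 stbyhost2 stbyhost3"   # placeholder host list
for H in $HOSTS
do
    ssh oracle@"$H" "/home/oracle/scripts/backup_database_inc0.sh" \
        || echo "Backup failed on $H"
done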
Thanks in advance for your answers.

I highly recommend using Enterprise Manager to run backup jobs. EM integrates nicely with the RMAN catalog and with each instance, so all you have to set up is the RMAN "execute global script" command; everything else is done by EM.
I have the job scheduled to run only on standbys through the EM job scheduler, without having to change anything during a switchover.
I just cascade the job steps: the first step checks whether the target is a standby, and only if that step succeeds does the next step run the backup. If it isn't a standby, the next step doesn't run.
This way the monitoring is also integrated with the global database monitoring, and you don't have to set up error catching inside a shell script at the OS level.

Related

How do you check the status of your database backup (in pgadmin 4)?

I started the backup with pgadmin 4. I unfortunately exited out of the process watcher box thingy. Now I'm not sure how to tell if the backup process is complete or still in progress.
If you started the backup in plain format, you can check the tail of the dump file for:
--
-- PostgreSQL database dump complete
--
Otherwise, the simplest way is to check pg_stat_activity on the server for a running backup session, as it should disconnect from the cluster on completion.
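For example (a sketch only; the dump path is a placeholder, and the application_name filter assumes a pg_dump recent enough to report itself that way -- otherwise filter on the query text):
# Plain-format dump: look for the completion marker at the end of the file
tail -n 3 /backups/mydb.sql

# Or check for a running pg_dump session on the server
psql -d postgres -c "SELECT pid, state, query_start FROM pg_stat_activity WHERE application_name = 'pg_dump';"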

pgagent - not running jobs - pgpass file is correct - postgresql

I have pgAgent installed on my Debian OS, along with PostgreSQL 9.4.
I have checked the .pgpass file, as this seems to be the most common cause of a job not running. The entries look like:
host:5432:*:postgres:xxxx
for both the local and the remote host. The database I'm trying to set up a job for is on a remote host.
I made sure it was enabled. It's just a simple INSERT script that should repeat every 5 minutes.
No errors are being triggered that I can find. Any ideas of what would cause the job not to run at all - even when selecting 'run now'?
Check the postgres DB, pgAgent catalog, pga_jobsteplog table.
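For example, something like this against the maintenance database (usually postgres) shows the recent step results, assuming the standard pgAgent catalog schema:
psql -d postgres -c "
  SELECT js.jstname, l.jslstatus, l.jslstart, l.jslresult, l.jsloutput
  FROM   pgagent.pga_jobsteplog l
  JOIN   pgagent.pga_jobstep js ON js.jstid = l.jsljstid
  ORDER  BY l.jslstart DESC
  LIMIT  20;"
# jslstatus: s = success, f = failed, r = running, i = ignored, d = aborted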
I don't know about Linux, but I had a similar problem on Windows where the job wouldn't run and didn't raise any error, even after doing RUN NOW. The only clue I could find was that if I clicked on the job and then on Statistics, I could see it had run a huge number of times, and every time its status was F.
The reason for this failure was that pgAgent couldn't connect to the main PostgreSQL database.
The pgAgent service wasn't running at all (on Windows you can see this under Services in Task Manager).
Forcing the service to start produced a failure that could be viewed in the Windows Event Viewer.
To solve this, first try putting the pgpass.txt file location into the environment variable (if it isn't picked up automatically). If that doesn't work, what I did was uninstall and delete all folders for PostgreSQL, pgAgent and pgAdmin, clear out all temp files, remove the registry entries and environment variables they had created, and then reinstall. After that it worked normally :)

Why did my postgres job stop running?

I created a job to clean the database every day at 01:00.
According to the statistics, it ran fine for 3 months.
But today I realized the database size was very big, so I checked the jobs, and it hasn't run for a month.
The properties say the last run was '10/27/2014' and the statistics confirm it ran successfully.
They also say the next run will be '10/28/2014', but it looks like it never ran and has stayed frozen since then.
(I'm using dd/mm/yyyy format)
So why did it stop running?
Is there a way to restart the job, or should I delete and recreate it?
How can I know that a job didn't run?
I guess I can write some code for when a job isn't successful, but what about when it never executes at all?
Windows Server 2008
PostgreSQL 9.3.2, compiled by Visual C++ build 1600, 64-bit
The problem was the pgAgent service wasn't running.
When I restart the Postgres service:
the Postgres service stops,
pgAgent stops because it is a dependent service,
the Postgres service starts,
but pgAgent doesn't.
Here you can see that pgAgent didn't start.
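As for knowing that a job didn't run: one rough way (not part of the answer above, just a sketch assuming the standard pgAgent catalog) is to look for enabled jobs whose scheduled next run is already in the past:
psql -d postgres -c "
  SELECT jobname, joblastrun, jobnextrun
  FROM   pgagent.pga_job
  WHERE  jobenabled
    AND  jobnextrun < now();"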

IBM DB2 replication on secondary passive machine

I have two machines, each with its own disks. One machine runs an active DB2 instance, and the secondary machine has DB2 installed but not running (only one license). In case the DB goes down, I need to start the secondary DB2 instance. The databases should come back online; it is not so critical that all of the latest data has been transferred.
What is the easiest way to do this? One option is to shut down all databases every night and script a backup routine. Another is HADR, but in that case I'm not sure whether HADR requires a separate license, and whether the DB2 instance on the secondary machine must be running (not possible, because we only have one license).
You can transfer each archived log as soon as the file is written to the archive directory of the primary database.
You transfer that file to the log directory on the second machine.
From time to time you can perform a "roll forward to end of logs", and that will reduce the time needed in case of a role switch.
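A rough sketch of that approach from the shell (host names, paths and the database name MYDB are placeholders, and passwordless scp between the machines is assumed):
# On the primary: ship newly archived logs to the standby machine
scp /db2/archlogs/MYDB/*.LOG standby-host:/db2/archlogs/MYDB/

# On the standby, from time to time: apply the shipped logs.
# Leaving out "AND STOP" keeps the database in rollforward-pending state,
# so more logs can be applied later.
db2 "ROLLFORWARD DATABASE MYDB TO END OF LOGS"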
You can also back up the primary and transfer that backup to the other machine, but you have to restore it when switching, and that could take very long.
You could probably also install Express-C on the second machine (two DB2 installations), and roll forward or restore periodically with that edition. In case of a switch, you just change the db2 instance or create a symbolic link to the binaries in order to activate the DB2 features not included in Express-C.

PostgreSQL - using log shipping to incrementally update a remote read-only slave

My company's website uses a PostgreSQL database. In our data center we have a master DB and a few read-only slave DBs, and we use Londiste for continuous replication between them.
I would like to setup another read-only slave DB for reporting purposes, and I'd like this slave to be in a remote location (outside the data center). This slave doesn't need to be 100% up-to-date. If it's up to 24 hours old, that's fine. Also, I'd like to minimize the load I'm putting on the master DB. Since our master DB is busy during the day and idle at night, I figure a good idea (if possible) is to get the reporting slave caught up once each night.
I'm thinking about using log shipping for this, as described on
http://www.postgresql.org/docs/8.4/static/continuous-archiving.html
My plan is:
Set up WAL archiving on the master DB
Produce a full DB snapshot and copy it to the remote location
Restore the DB and get it caught up
Go into steady state where:
DAYTIME -- the DB falls behind but people can query it
NIGHT -- I copy over the day's worth of WAL files and get the DB caught up
Note: the key here is that I only need to copy a full DB snapshot one time. Thereafter I should only have to copy a day's worth of WAL files in order to get the remote slave caught up again.
Since I haven't done log-shipping before I'd like some feedback / advice.
Will this work? Does PostgreSQL support this kind of repeated recovery?
Do you have other suggestions for how to set up a remote semi-fresh read-only slave?
thanks!
--S
Your plan should work.
As Charles says, warm standby is another possible solution. It's supported since 8.2 and has relatively low performance impact on the primary server.
Warm Standby is documented in the Manual: PostgreSQL 8.4 Warm Standby
The short procedure for configuring a standby server is as follows. For full details of each step, refer to previous sections as noted.
1. Set up primary and standby systems as near identically as possible, including two identical copies of PostgreSQL at the same release level.
2. Set up continuous archiving from the primary to a WAL archive located in a directory on the standby server. Ensure that archive_mode, archive_command and archive_timeout are set appropriately on the primary (see Section 24.3.1).
3. Make a base backup of the primary server (see Section 24.3.2), and load this data onto the standby.
4. Begin recovery on the standby server from the local WAL archive, using a recovery.conf that specifies a restore_command that waits as described previously (see Section 24.3.3).
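For reference, the recovery.conf on the standby could be as small as this (a sketch; the archive directory is a placeholder, and on 8.4 the waiting restore_command is usually provided by the contrib tool pg_standby):
# recovery.conf on the reporting standby
restore_command = 'pg_standby /var/lib/postgresql/wal_archive %f %p %r'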
To achieve only nightly syncs, your archive_command should exit with a non-zero exit status during daytime.
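One way to implement that is a small wrapper used as the archive_command (a sketch only: the script path, the night window, the standby host and the archive directory are all assumptions, and note that the primary will accumulate WAL in pg_xlog during the day while the command keeps failing):
#!/usr/bin/bash
# nightly_archive.sh -- hypothetical wrapper, used on the master as
#   archive_command = '/usr/local/bin/nightly_archive.sh %p %f'
WAL_PATH="$1"   # %p: path of the WAL segment (relative to the data directory)
WAL_FILE="$2"   # %f: file name of the WAL segment

HOUR=$(date +%H)
# Ship only between 01:00 and 05:59; outside that window exit non-zero
# so PostgreSQL keeps the segment and retries later.
if [ "$HOUR" -lt 1 ] || [ "$HOUR" -gt 5 ]
then
    exit 1
fi

scp "$WAL_PATH" standby-host:/var/lib/postgresql/wal_archive/"$WAL_FILE"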
Additional information:
Postgres Wiki about Warm Standby
Blog Post Warm Standby Setup
9.0's built-in WAL streaming replication is designed to accomplish something that should meet your goals -- a warm or hot standby that can accept read-only queries. Have you considered using it, or are you stuck on 8.4 for now?
(Also, the upcoming 9.1 release is expected to include an updated/rewritten version of pg_basebackup, a tool for creating the initial backup point for a fresh slave.)
Update: PostgreSQL 9.1 will include the ability to pause and resume streaming replication with a simple function call on the slave.