I have a table EMPLOYEE.EMPLOYEE inside database HELLO which contains 3 records as listed below:
EMP_NO  BIRTH_DATE  FIRST_NAME  LAST_NAME  GENDER  HIRE_DATE   BANK_ACCOUNT_NUMBER  PHONE_NUMBER
------  ----------  ----------  ---------  ------  ----------  -------------------  ------------
     1  06/05/1998  A           B          M       01/02/2019  026201521420         +91X
     2  10/14/1997  C           D          M       01/07/2019  034212323454         +91Y
     3  05/27/1997  E           F          F       01/14/2019  92329323123          +91Z
First I take an offline backup using the following commands:
mkdir offlinebackup
db2 terminate
db2 deactivate database HELLO
db2 backup database HELLO to ~/offlinebackup/
After which I get this output:
Backup successful. The timestamp for this backup image is : 20190128115210
Now I take an online backup using the following commands:
db2 update database configuration for HELLO using LOGARCHMETH1 'DISK:/database/config/db2inst1/onlinebackup'
db2 backup database HELLO online to /database/config/db2inst1/onlinebackup compress include logs
After this I get the output as:
Backup successful. The timestamp for this backup image is : 20190128115616
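To confirm that both images were recorded, the backup history can be listed; db2 list history is a standard command (output omitted here, but it would show both timestamps above):
db2 list history backup all for HELLO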
Now I go back to db2 and run CONNECT TO HELLO, which connects me to my database. When I check the rows in the EMPLOYEE.EMPLOYEE table, I still get all 3 rows.
Now I remove the row with EMP_NO 3. It is successfully removed. Then I run quit to exit the db2 terminal.
Then I use this command to run the restore from my offline backup:
db2 restore db HELLO from ~/offlinebackup/ replace existing
It says DB20000I The RESTORE DATABASE command completed successfully
Now when I try to connect to HELLO, it says SQL1117N A connection to or activation of database "HELLO" cannot be made because of ROLL-FORWARD PENDING. SQLSTATE=57019
To which I run db2 rollforward db HELLO to end of logs and stop
Then I connect to HELLO and query the rows: I get only 2 rows, not the 3 that were in the backup.
EMP_NO  BIRTH_DATE  FIRST_NAME  LAST_NAME  GENDER  HIRE_DATE   BANK_ACCOUNT_NUMBER  PHONE_NUMBER
------  ----------  ----------  ---------  ------  ----------  -------------------  ------------
     1  06/05/1998  A           B          M       01/02/2019  026201521420         +91X
     2  10/14/1997  C           D          M       01/07/2019  034212323454         +91Y
The third record, which was present in the backup, is not visible. Can anyone figure out why I am not able to restore the third record from the backup?
The rollforward command that you ran:
db2 rollforward db HELLO to end of logs and stop
replayed all available logs, including the log record corresponding to the DELETE statement.
If you wanted to restore the database to the state right after the backup was taken, you could have run:
db2 rollforward db HELLO to end of backup and stop
Alternatively, since you are restoring from an offline backup, a rollforward is not necessary at all, and you could have used:
db2 rollforward db HELLO stop
Or you can skip the rollforward step completely (for offline backups only, of course):
db2 restore db HELLO from ~/offlinebackup/ replace existing without rolling forward
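Putting it all together, a restore that brings the database back to the exact state of the offline backup, followed by a quick check, would look like this (same path and table as above):
db2 restore db HELLO from ~/offlinebackup/ replace existing without rolling forward
db2 connect to HELLO
db2 "select * from employee.employee"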
I am trying to use pg_cron to schedule calls to stored procedures in several DBs in a Postgres Cloud SQL instance.
Unfortunately, it looks like the pg_cron extension can only be created in the postgres database.
When I try to create pg_cron in a DB other than postgres, I get this message:
CREATE EXTENSION pg_cron;
ERROR: can only create extension in database postgres
Detail: Jobs must be scheduled from the database configured in
cron.database_name, since the pg_cron background worker reads job
descriptions from this database. Hint: Add cron.database_name =
'mydb' in postgresql.conf to use the current database.
Where: PL/pgSQL function inline_code_block line 4 at RAISE
Query = CREATE EXTENSION pg_cron;
... I don't think I have access to postgresql.conf in Cloud SQL ... is there another way?
Maybe I could use postgres_fdw to achieve my goal?
Thank you,
There's no need to edit any files. All you have to do is set the cloudsql.enable_pg_cron flag (see guide) and then create the extension in the postgres database.
You need to log onto the postgres database rather than the one you're using for your app. For me that's just replacing the name of my app database with 'postgres', e.g.
psql -U<username> -h<host ip> -p<port> postgres
Then simply run the create extension command and the cron.job table appears. Here's one I did a few minutes ago in our cloudsql database. I'm using the cloudsql proxy to access the remote db:
127.0.0.1:2345 admin@postgres=> create extension pg_cron;
CREATE EXTENSION
Time: 268.376 ms
127.0.0.1:2345 admin@postgres=> select * from cron.job;
jobid | schedule | command | nodename | nodeport | database | username | active | jobname
-------+----------+---------+----------+----------+----------+----------+--------+---------
(0 rows)
Time: 157.447 ms
Be careful to specify the correct target database when setting the schedule, otherwise pg_cron will assume that you want the job to run in the postgres database.
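For example, on pg_cron 1.4 and later the target database can be named explicitly at scheduling time (a sketch; the job name, schedule, and database here are placeholders):
SELECT cron.schedule_in_database('nightly-vacuum', '0 3 * * *', 'VACUUM', 'mydb');
On older versions, the same effect can be achieved by updating the database column of the corresponding cron.job row after scheduling.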
.. I don't think I have access to postgresql.conf in Cloud SQL ...
Actually there is: you can use the patch command.
According to the pg_cron doc, you need to change two things in the conf file:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'another_db' # optionally, to change the database where the pg_cron background worker expects its metadata tables to be created
Now, according to gcloud, you need to set two flags on your instance:
gcloud sql instances patch [my_instance] --database-flags=cloudsql.enable_pg_cron=on,cron.database_name=[my_name]
CAREFUL: don't run the patch command twice, as the second call would erase your first setting. Put all your flag changes in one command.
You also might want to set cron.database_name in postgresql.conf (or as a flag in Cloud SQL):
cron.database_name = mydatabase
Facing a weird issue in DB2. Unable to connect to remote DB.
It catalogued successfully, but when I tried to connect to the DB alias I got an error:
"SQL30061N The database alias or database name "NDTEST " was not
found at the remote node."
OS: Linux
db2level output:
DB21085I This instance or install (instance name, where applicable:
"db2inst1") uses "64" bits and DB2 code release "SQL10055" with level
identifier "0606010E".
Informational tokens are "DB2 v10.5.0.5", "s141128", "IP23633", and Fix Pack
"5".
Product is installed at "/path/to/db2".
But we never specified "NDTEST " anywhere.
Database alias = QAZWSXED
Database name = NEWDB(changedName)
Node name = BASENNEW
Database release level = 10.00
Comment =
Directory entry type = Remote
Authentication = SERVER_ENCRYPT
Catalog database partition number = -1
Alternate server hostname =
Alternate server port number =
Node name = BASENNEW
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = hostname
Service name = portNumber
db2 connect to QAZWSXED
SQL30061N The database alias or database name "NDTEST " was not
found at the remote node. SQLSTATE=08004
The error means exactly what it says - there is no NEWDB database on the BASENNEW node.
The fact that you were able to catalog the database doesn't mean it is actually there. There is no connection attempt during the CATALOG DATABASE command (note that you are not even prompted for a password).
E.g. if I create a local TCP/IP loopback node for my instance:
$ db2 catalog tcpip node loop remote localhost server 61115
I can catalog both an existing database (SAMPLE) and a non-existent one (BADDB) without issues:
$ db2 catalog database sample as loopsamp at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 catalog database baddb as loopbad at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
I will be able to connect to the first one:
$ db2 connect to loopsamp user kkuduk
Enter current password for kkuduk:
Database Connection Information
Database server = DB2/LINUXX8664 11.5.0.0
SQL authorization ID = KKUDUK
Local database alias = LOOPSAMP
but a connection attempt to the non-existent one will fail with SQL30061N:
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
SQL30061N The database alias or database name "BADDB " was not
found at the remote node. SQLSTATE=08004
Please verify the database directory on the remote server by running
$ db2 list db directory
and check whether there is an entry for your database with the type Indirect:
Directory entry type = Indirect
Edit:
I didn't notice your edit that changed the database name. If the error returns a stale database name, then indeed db2 terminate is needed to start a new CLP client application (db2bp).
E.g. if I uncatalog the incorrect entry and catalog it again, I get a similar error, as the client uses the cached entry pointing to the incorrect database name:
$ db2 uncatalog db LOOPBAD
DB20000I The UNCATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 catalog database sample as loopbad at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
SQL30061N The database alias or database name "BADDB " was not
found at the remote node. SQLSTATE=08004
db2 terminate ends the Db2 CLP client back-end, and the new entry is then read correctly from the catalog:
$ db2 terminate
DB20000I The TERMINATE command completed successfully.
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
Database Connection Information
Database server = DB2/LINUXX8664 11.5.0.0
SQL authorization ID = KKUDUK
Local database alias = LOOPBAD
Found the problem - there was an entry in DCS (Database Connection Services). To check the DCS details:
db2 list dcs directory
The above command showed a DCS entry with the Target database name "NDTest ".
Everything works fine after removing/uncataloguing the DCS entry.
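For reference, a DCS entry is removed with the standard UNCATALOG DCS DATABASE command; it takes the Local database name shown by db2 list dcs directory (a placeholder below, since only the target name was quoted above):
db2 uncatalog dcs db LOCAL_DB_NAME
db2 terminate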
I am trying to import JOBFILE.CSV from my hard drive into the table JOB using Run SQL in IBM Db2 on Cloud.
CALL SYSPROC.ADMIN_CMD('IMPORT FROM "C:/DATAFILE/JOBFILE.CSV"
OF DEL INSERT INTO JOB');
I am getting this error:
An I/O error (reason = "sqlofopn -2029060079") occurred while opening
the input file.. SQLCODE=-3030, SQLSTATE= , DRIVER=4.25.1301
It seems the path that I have set is not working. From what I have researched, JOBFILE.CSV must first be loaded onto the Db2 server before the IMPORT command can run.
With a file located on a local client there are two "basic" options (excluding e.g. the Db2 on Cloud REST API for importing the data):
LOAD with the CLIENT keyword (also works with all Db2 LUW on-premises releases); see the sketch after the EXTERNAL TABLE example below
INSERT from an EXTERNAL TABLE (available in Db2 on Cloud, Db2 Warehouse and the 11.5 release)
The latter is typically the fastest. See an example with an input like this:
db2 "create table import_test(c1 int, c2 varchar(10))"
echo "1,'test1'" > data.del
echo "2,'test2'" >> data.del
To insert the data from the client, we can run:
db2 "INSERT INTO import_test SELECT * FROM EXTERNAL '/home/db2v111/data.del' USING (DELIMITER ',' REMOTESOURCE YES)"
DB20000I The SQL command completed successfully.
db2 "select * from import_test"
C1 C2
----------- ----------
2 'test2'
1 'test1'
2 record(s) selected.
More examples, including importing data from S3, can be found in the Loading data chapter of the IBM Cloud documentation.
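For completeness, the first option, LOAD with the CLIENT keyword, might look like this against the same sample table and file (a sketch; it reuses the names from the example above):
db2 "LOAD CLIENT FROM /home/db2v111/data.del OF DEL INSERT INTO import_test"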
I am trying to take an offline backup of my DB2 (10.1.0) databases using a script, and to schedule it.
db2backup.bat
@ECHO OFF
FOR xxxx IN (OPNACT BLOGS SNCOMM DOGEAR FILES FORUM HOMEPAGE PEOPLEDB WIKIS) DO (
DB2 CONNECT TO xxxx
DB2 QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
DB2 CONNECT RESET
DB2 BACKUP DATABASE xxxx TO "C:\Backup\DB2" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 3 WITHOUT PROMPTING
DB2 CONNECT TO xxxx
DB2 UNQUIESCE DATABASE
DB2 CONNECT RESET
)
But when I try to run it with
DB2CMD /c /w /i C:\Backup\db2backup.bat
I get an error:
"xxxx was unexpected at this time."
Why am I getting this? How can I avoid it?
Many thanks for your input!
The problem is how you write the loop variable. xxxx is not a valid FOR variable: in a batch file the variable must be a single letter preceded by two percent signs, e.g. %%D:
@ECHO OFF
FOR %%D IN (OPNACT BLOGS SNCOMM DOGEAR FILES FORUM HOMEPAGE PEOPLEDB WIKIS) DO (
DB2 CONNECT TO %%D
DB2 QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
DB2 CONNECT RESET
DB2 BACKUP DATABASE %%D TO "C:\Backup\DB2" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 3 WITHOUT PROMPTING
DB2 CONNECT TO %%D
DB2 UNQUIESCE DATABASE
DB2 CONNECT RESET
)
For more information you can check this question What is the difference between % and %% in a cmd file?
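As for the scheduling part, one way on Windows is the built-in Task Scheduler, e.g. (a sketch; the task name and start time are placeholders):
schtasks /create /tn "DB2OfflineBackup" /tr "DB2CMD /c /w /i C:\Backup\db2backup.bat" /sc daily /st 02:00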
I am using the pg_trgm extension for fuzzy search. The default threshold is 0.3, as shown in:
# select show_limit();
show_limit
------------
0.3
(1 row)
I can change it with:
# select set_limit(0.1);
set_limit
-----------
0.1
(1 row)
# select show_limit();
show_limit
------------
0.1
(1 row)
But when I restart my session, the threshold is reset to the default value:
# \q
$ psql -Upostgres my_db
psql (9.3.5)
Type "help" for help.
# select show_limit();
show_limit
------------
0.3
(1 row)
I want to execute set_limit(0.1) every time I start a PostgreSQL session. In other words, I want to set 0.1 as the default threshold for the pg_trgm extension. How do I do that?
This has been asked before:
Set default limit for pg_trgm
The initial setting is hard coded in the source. One could hack the source and recompile the extension.
To address your comment:
You could put a command in your psqlrc or ~/.psqlrc file. A plain and simple SELECT command in a separate line:
SELECT set_limit(0.1)
Be aware that the additional module is installed per database, while psql can connect to any database cluster and any database within that cluster. It will cause an error message when connecting to any database where pg_trgm is not installed. Nothing bad will happen, though.
On the other hand, connecting with any other client will not set the limit; this may be a bit of a trap.
pg_trgm should really provide a config setting ...
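For what it's worth, newer versions do: starting with PostgreSQL 9.6, pg_trgm exposes a pg_trgm.similarity_threshold parameter, which can be persisted per database without hacking the source (not available on the 9.3 installation shown above):
ALTER DATABASE my_db SET pg_trgm.similarity_threshold = 0.1;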