When I restore a database, the data goes to the C drive by default, but when I installed DB2 I specified a path on the D drive.
Also, the sample database files created by DB2 are stored on the D drive.
Can anyone please tell me what the issue is?
I have run this command:
SELECT * FROM SYSIBMADM.DBPATHS
Below is the result I fetched:
LOGPATH - D:\DB2\NODE000\SQL00001\SQLOGDIR\
DB_STORAGE_PATH - C:\
LOCAL_DB_DIRECTORY - D:\DB2\NODE000\SQLOGDIR\
DBPATH - D:\DB2\NODE000\SQL00001\
I want to change this DB_STORAGE_PATH from C:\ to D:\ for all the databases that I will be restoring.
You can run db2set from the DB2 command line; it will confirm where DB2 is installed, along with other information:
db2-command-line> db2set
DB2_ATS_ENABLE=YES
DB2_CREATE_DB_ON_PATH=YES
DB2INSTPROF=C:\where\db2\installed\IBM\DB2\DB2COPY1
DB2COMM=TCPIP
You can get more information in the "Directory structure for your installed DB2 database product (Windows)" documentation.
You can run the following command: SELECT * FROM SYSIBMADM.DBPATHS. This will give the values of the following paths for your DB2 database:
LOGPATH
DB_STORAGE_PATH
LOCAL_DB_DIRECTORY
DBPATH
These commands will give you enough information to locate your installed database. Then you can restore your database, providing the exact path.
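For example, when restoring into a database that does not yet exist, both the storage path and the database path can be pointed at D: (the database name, backup directory, and timestamp below are placeholders):
db2 restore db MYDB from D:\backups taken at 20230101120000 on D:\ dbpath on D:\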
To add a storage path to an existing database, issue the following ALTER DATABASE statement:
ALTER DATABASE database-name ADD STORAGE ON storage-path
After adding one or more storage paths to the database, you may use the ALTER TABLESPACE statement to rebalance table spaces in the database so that they start to use the new storage paths immediately.
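If the database uses automatic storage, a minimal sketch of those two steps could look like this (MYDB and USERSPACE1 are example names):
ALTER DATABASE MYDB ADD STORAGE ON 'D:\'
ALTER TABLESPACE USERSPACE1 REBALANCE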
DB2 has a configuration parameter for the default path for databases, dftdbpath. In addition, the command db2sampl to create a sample database has an option dbpath to specify where to place that database.
db2sampl -dbpath D:
The above would place the new database on drive D:.
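To check or change the default database path itself, the dftdbpath database manager configuration parameter can be read and updated from the command line, roughly as follows (the new value only affects databases created afterwards):
db2 get dbm cfg
db2 update dbm cfg using DFTDBPATH D: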
You will find that there are default paths for certain operations. The overview of DB2 database manager configuration parameters lists most of them.
For your specific issue I would assume that a parameter was changed some time after DB2 was installed and used initially.
For RESTORE be aware that the options TO and DBPATH are ignored if restoring an existing database.
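If an existing database really has to move to D:, one approach (assuming the database can be rebuilt entirely from the backup image) is to drop it first, so that the subsequent restore counts as a restore into a new database and the ON D:\ DBPATH ON D:\ clauses take effect:
db2 drop database MYDB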
I'm planning to move some tables to different tablespaces (folders) on my PROD Linux box.
Overnight DB backups are done using pg_dumpall.
I also have a DEV environment running under Windows OS, where I usually restore the SQL dump (made on Linux).
I'm now wondering how to restore such SQL dumps, which contain pointers to a Linux partition in Linux notation.
I read on various webpages that the same folder structure has to be created in order to restore non-standard tablespaces. But folder paths in Windows and Linux look totally different (c:\... vs /opt/...).
Is there any command-line switch that allows remapping a tablespace to another (Windows-style) location during restore? If not, how do you manage that scenario?
I guess I should be able to achieve that by editing the SQL dump file - but it's huge (a few hundred gigabytes), and it is also a bit problematic to automate.
You can retrieve the actual tablespace definitions with a separate pg_dumpall command. You still need to do some editing, but the output is not that large. (similar for users)
pg_dumpall --tablespaces-only mydatabasename >stuff.out
There is no option to remap tablespace names during import, so you will need to create them in your Windows installation with the same name - the actual physical location ("folder structure") is irrelevant as the SQL dump only references them by name.
If the script contains the CREATE TABLESPACE command, you need to change that command to use a directory/path that exists on your system before you can run the SQL script. But you only need to change that; all other places will refer to the tablespace name, not the folder path.
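For example, a tablespace definition in the dump taken on Linux might look like the first statement below (the tablespace name and paths are just examples), and would be edited for the Windows DEV box into something like the second:
CREATE TABLESPACE ts_data OWNER postgres LOCATION '/opt/pgdata/ts_data';
CREATE TABLESPACE ts_data OWNER postgres LOCATION 'C:/pgdata/ts_data';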
Typically pg_dump is easier than pg_dumpall for moving databases around (e.g. because of tablespaces).
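A sketch of that route (database and file names are examples): dump with pg_dump and restore with pg_restore, whose --no-tablespaces option creates every object in the default tablespace and so sidesteps the path problem entirely.
pg_dump -Fc -f mydatabasename.dump mydatabasename
pg_restore --no-tablespaces -d mydatabasename mydatabasename.dump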
I have a PostgreSQL database DATA1 with a tablespace located in D:\tbl_DATA1. We used an OS backup/restore tool to copy D:\tbl_DATA1 to C:\tbl_DATA1 on a target machine. Is it possible to recreate the database from this folder on the second machine?
https://www.postgresql.org/docs/current/static/backup-file.html
An alternative backup strategy is to directly copy the files that PostgreSQL uses to store the data in the database
The page later mentions two restrictions:
The database server must be shut down in order to get a usable backup.
You should restore the whole PGDATA directory, not certain individual tables or databases from their respective files or directories.
So yes - it is a common practice to shut down PostgreSQL, copy the PGDATA directory to another machine and start Postgres there in order to get a copy of the cluster. But it is done at the cluster level - not at the tablespace or database level as you mention - the whole data_directory should be copied.
So no - copying the tablespace directory and trying to hack the db to add a tablespace will fail.
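A minimal sketch of that cluster-level copy on Windows (the data directory path is an example, and both machines must run the same major PostgreSQL version):
pg_ctl stop -D "C:\PostgreSQL\data"
then copy the whole C:\PostgreSQL\data directory (not just the tablespace folder) to the target machine with your OS backup/restore tool, and on the target machine run:
pg_ctl start -D "C:\PostgreSQL\data"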
I have an offline backup of an IBM DB2 database, which I am trying to restore using the following command.
It is throwing the following error.
Is there any problem with the command?
There are three problems, actually. First, in the from clause you must specify the directory enclosing the backup image, not the image itself. The second problem is the spaces in the directory name. Thirdly, it's taken at, not takenat.
I have no way of testing this on Windows, but you may want to try enclosing the from path in double quotes:
db2 restore db exoffice from "d:\db2\offline backup - db2hc" taken at ...
I am using the command
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 on /users/db2inst1/ dbpath on /users/db2inst1/ redirect without rolling forward
to restore a database from a backup file located in /users/intadm/s18backup/.
Command execution gives such output:
SQL1277W A redirected restore operation is being performed. Table space
configuration can now be viewed and table spaces that do not use automatic
storage can have their containers reconfigured.
DB20000I The RESTORE DATABASE command completed successfully.
When I'm trying to connect to restored DB (by executing 'db2 connect to S18'), I'm getting this message:
SQL0752N Connecting to a database is not permitted within a logical unit of
work when the CONNECT type 1 setting is in use. SQLSTATE=0A001
When I try to connect to the DB with a DB viewer like SQuirreL, the error is:
DB2 SQL Error: SQLCODE=-1119, SQLSTATE=57019, SQLERRMC=S18, DRIVER=3.57.82
which means 'an error occurred during a restore function or a restore is still in progress' (from the IBM DB2 manuals).
How can I resolve this and connect to restored database?
UPD: I've executed db2ckbkp on the backup file and it did not identify any issues with the backup file itself.
The without rolling forward option can only be used when restoring from an offline backup. Was your backup taken offline? If not, you'll need to use roll forward.
When you do a redirected restore, you are telling DB2 that you want to change the locations of the data files in the database you are restoring.
The first step you show above will execute very quickly.
Normally, after you execute this statement, you would issue one or more SET TABLESPACE CONTAINERS statements to set the new locations of each data file. It's not mandatory to issue these statements, but there's no point in specifying the redirect option in your RESTORE DATABASE command if you're not changing anything.
Then, you would issue the RESTORE DATABASE S18 CONTINUE command, which would actually read the data from the backup image, writing it to the data files.
If you did not execute the RESTORE DATABASE S18 CONTINUE step, then your restore process is incomplete and it makes sense that you can't connect to the database.
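A minimal sketch of the full redirected-restore sequence (the table space ID and container path are just examples):
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 redirect
db2 "set tablespace containers for 2 using (path '/users/db2inst1/ts2')"
db2 restore db S18 continue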
What I did and what worked:
Executed:
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 on /<path with sufficient disk space> dbpath on /<path with sufficient disk space>
Before, I got some warnings that some table spaces were not moved. When I pointed dbpath to a partition with sufficient disk space, the warnings disappeared.
After that, as I have an online backup, I issued:
db2 rollforward db S18 to end of logs and complete
That's it! Now I'm able to connect.
Where are log files stored in DB2?
I am searching for a file with the name Updatedb20100604182008.log
from this page:
http://www.ibm.com/developerworks/data/library/techarticle/0301kline/0301kline.html
(The article goes into further detail about default locations as well.)
The database logs are initially created in a directory called SQLOGDIR, a sub-directory of the database directory. You can change the location where active logs and future archive logs are placed by changing the value for this configuration parameter to point to either a different directory, or to a device. Archive logs that are currently stored in the database log path directory are not moved to the new location if the database is configured for roll-forward recovery.
Because you can change the log path location, the logs needed for roll-forward recovery may exist in different directories or on different devices. You can change this configuration parameter during the roll-forward process to allow you to access logs in multiple locations.
The change to the value of newlogpath will not be applied until the database is in a consistent state. An informational database configuration parameter, database_consistent, indicates the status of the database.
Note: The database manager writes to transaction logs one at a time. The total size of transactions that can be active is limited by the database configuration parameters:
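For reference, the current log path can be inspected and a new one set with commands along these lines (MYDB and the path are placeholders); as the quote above notes, the newlogpath change only takes effect once the database is in a consistent state:
db2 get db cfg for MYDB
db2 update db cfg for MYDB using NEWLOGPATH /db2logs/MYDB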
The DB2 log file location can be found from the DB CFG parameter - 'Path to log files'.
The command would be the one below, which does not require an explicit connection to the DB.
db2 get db cfg for db_name | grep 'Path to log files'
Alternatively, you can connect to the DB first and use the commands as follows:
db2 connect to db_name
db2 get db cfg | grep 'Path to log files'
db2 terminate
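On Windows, where grep is usually not available, findstr can be used instead, for example:
db2 get db cfg for db_name | findstr /C:"Path to log files"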
db2 connect to database
db2 get db cfg | grep -i log
cd /data/dblogs/NODE0000 (path to the log files)
cd LOGSTREAM0000 (this is the log folder)
ls -altr (lists all the log files with the .log extension)
rm abc.log (give the name of the log file you want to delete)