DB2 Command Line Processor times out

Freshly installed DB2 10.1 on 64 bit Ubuntu 12.04 virtual machine.
I'm getting a DB21015E error when I execute commands like:
sudo ./DB2 update DBM CFG using SYSADM_GROUP db2iadm1
or even
sudo ./DB2 get DBM CFG
I tried increasing the DB2BQTIME parameter, but I don't think it has anything to do with that.
The online help also suggests that 'db2bp' must reside in the correct folder, with execution rights. It sits in /home/db2inst1/sqllib/bin with -r-xr-xr-x.
What else could go wrong? Is there a log file that I can check?
UPDATE
Ran strace and it tells me that /tls/x86_64/libm.so.6 is missing.

Who is your DB2 user? db2inst1? Then you should execute the command as db2inst1:
sudo -s
su - db2inst1
db2 update DBM cfg using SYSADM_GROUP db2iadm1
Note: the db2 command is lowercase, no capitals.
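To confirm the change took effect, a quick check (a sketch assuming the instance user is db2inst1, as above) is to read the parameter back from the database manager configuration:
# run as the instance user; SYSADM_GROUP should now show db2iadm1
db2 get dbm cfg | grep -i sysadm_group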

It is not necessary to execute these commands as root (sudo). Most of the time, root does not have any rights on an instance or database in DB2. As Paul has said, you should change the session to the instance user, which I suppose in your case is db2inst1.
If your problem persists, the best option is to drop and recreate the instance. This will not drop your current database.
sudo su -
cd /opt/ibm/db2/V10.1/instance
./db2idrop db2inst1
./db2icrt -u db2inst1 db2inst1
The user names could differ depending on your security schema (fenced user, instance user, DB2 path, etc.).
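After recreating the instance, a quick sanity check could look like the sketch below (instance name db2inst1 and install path as above; adjust for your installation):
/opt/ibm/db2/V10.1/instance/db2ilist    # run as root, should list db2inst1 again
su - db2inst1
db2start
db2 list db directory                   # your existing databases should still be cataloged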

Related

Creating a copy of the database in PostgreSQL

I'm trying to create the first copy of my database. I'm using PostgreSQL on Ubuntu 16+ with Django.
I found this documentation to create a copy:
I'm trying to export the entire database to a file so that I can add it to another server. I tried this:
pg_dump app_prod > test_copy
pg_dump --host=localhost --username=app --dbname=app_prod --file=testdb.sql
After running ls I can see the dump files in my directory, but when I browse with e.g. WinSCP they are not visible.
How can I take these files, copy them to my Windows system and upload to another Ubuntu server?
I think that it is enough to make them visible in WinSCP. How can I do this?
EDIT:
drwxr-xr-x 3 postgres postgres 4096 Oct 4 08:06 9.5
-rw-rw-r-- 1 postgres postgres 3578964 Jan 18 10:46 test_copy
-rw-rw-r-- 1 postgres postgres 0 Jan 18 10:54 testdb.sql
It seems like this was resolved in the comments: you were looking at the wrong folder in the WinSCP folder explorer.
There are a few items worth noting to bolster the good advice already given:
Your ls -l output indicates that the SQL file is zero bytes in size, so something has gone wrong there. If you manage to transfer it to your local machine, you will find it is empty.
Also, try not to store database dumps in /var/lib/postgresql - this is where your PostgreSQL database keeps live database files on the server, and you don't want to risk changing or deleting anything here. Use /home/maddie instead (change the username as appropriate).
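For example, a dump written to the home directory instead could look like this (a sketch reusing the names from this thread; adjust the user, database, and path to your setup):
pg_dump --host=localhost --username=app --dbname=app_prod --file=/home/maddie/app_prod.sql
ls -lh /home/maddie/app_prod.sql    # should be well over 0 bytes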

How to start the postgres service as a different user (myown) instead of the postgres user

I have installed Postgres 9.5 on my Linux (16.04) machine. I started the service using the command below.
sudo service postgresql start
This starts the PostgreSQL service as the postgres user.
But I want to run Postgres as a different user (my own user).
How can I do this? Please help!
You have to recursively change the ownership of the database directory to the new user.
If the WAL directory or tablespaces are outside the data directory, you have to change their ownership too.
Then you will have to configure the startup script so that it starts PostgreSQL as the new user. Watch out: if you installed the startup script with an installation package, any changes to it will probably be lost after an update.
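A rough sketch of those steps, assuming the default Ubuntu 16.04 / PostgreSQL 9.5 layout and a new OS user called myown (paths are examples only; on Debian/Ubuntu the configuration lives under /etc/postgresql, and further pieces such as the pid file directory and config file permissions may also need adjusting):
sudo service postgresql stop
sudo chown -R myown:myown /var/lib/postgresql/9.5/main
sudo -u myown /usr/lib/postgresql/9.5/bin/pg_ctl -D /var/lib/postgresql/9.5/main -o "-c config_file=/etc/postgresql/9.5/main/postgresql.conf" start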
I recommend that you don't do all that and continue running PostgreSQL as postgres.

Firebird 3 on macOS, local connection fails with: Can not access lock files directory /tmp/firebird/

I've installed firebird 3.0 from the package provided by firebirdsql.org.
If I try to use a local connection to a database:
isql employee -user SYSDBA
it fails with:
Can not access lock files directory /tmp/firebird/
So adding read/write/execute permissions to /tmp/firebird/
sudo chmod a+rwx /tmp/firebird/
and executing the command again yields:
Statement failed, SQLSTATE = 08001
I/O error during "open" operation for file "/tmp/firebird/fb_init"
-Error while trying to open file
-Unknown error: -1
This all works if I sudo the calls, but is that really necessary?
What is the correct way to use a local connection to firebird database on macOS?
I found issue CORE-3871 in the Firebird issue tracker, which describes the problem and its solution. The user that tries to open the local connection must be a member of the firebird user group.
So on macOS, a user is added to the firebird group with the following command:
sudo dseditgroup -o edit -a myusername -t user firebird
If you try to open the sample database employee, shipped with firebird, it's also necessary to grant the group write access to the employee.fdb:
sudo chmod g+w /Library/Frameworks/Firebird.framework/Resources/examples/empbuild/employee.fdb
Now /Library/Frameworks/Firebird.framework/Resources/bin/isql employee -user SYSDBA should work
I only added -p and the password, and it works just fine.
Your current command uses the Firebird Embedded database engine to connect to the database. To be able to do that, your current OS user needs to have sufficient access to the database file. For details on how to fix that, see the answer by jonjonas68.
An alternative solution - if you have the Firebird server running - is to connect through the Firebird server process, for example using isql localhost:employee -user sysdba -password <sysdbapassword>. Then the file permissions of the user running the Firebird server process will apply. However, in that situation you will need to specify a password when connecting, as passwordless authentication is only applied for Firebird Embedded connections.

Moving a firebird 2.5 database

So currently I have firebird 2.5 installed and running on Windows, working fine but performance is a bit slow.
I have installed 2.5 on Ubuntu, and I can connect to the current database with ISQL easily:
connect "192.168.155.112:C:\database\database.FDB" user 'SYSDBA' password 'adminpassword';
So I stopped the firebird services on the Windows server, copied the file to the Ubuntu server, and in isql tried to run:
SQL> connect "localhost:/var/lib/firebird/2.5/data/database.FDB" user 'SYSDBA' password 'adminpassword';
Statement failed, SQLSTATE = m
file /var/lib/firebird/2.5/data/database.FDB is not a valid database
Note I have so far tried:
~$ sudo adduser `id -un` firebird
[sudo] password for luke:
The user `luke' is already a member of `firebird'.
As well as
# chown firebird /var/lib/firebird/2.5/data/database.fdb
With no luck. If anyone has any idea why I might be getting this error, I would be very grateful :)
I am not sure if Super or Classic was used on Windows; however, I have tried both on Ubuntu with the same error message. The Windows server version is 2.5.6, same version on Linux.
You need to back up the database using gbak, and then restore it using gbak.
To backup:
gbak -backup employee D:\backups\employee.fbk
To restore:
gbak -c /backups/employee.fbk employee
Where employee is either the path or the alias of the database.
See also the gbak manual for more information.
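Put together, the move could look roughly like this (a sketch only; paths and credentials are examples, adjust them to your servers):
# On the Windows server: back up the database to a transportable .fbk file
gbak -b -user SYSDBA -password adminpassword C:\database\database.FDB D:\backups\database.fbk
# Copy database.fbk to the Ubuntu server (scp, WinSCP, ...), then restore it there:
gbak -c -user SYSDBA -password adminpassword /tmp/database.fbk /var/lib/firebird/2.5/data/database.fdb
# Make sure the firebird user owns the restored file afterwards:
chown firebird /var/lib/firebird/2.5/data/database.fdb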

Remote trigger a postgres database backup

I would like to back up my production database before and after running a database migration, triggered from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.Net application; it checks out changes in the source code, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up to do a crontab job backup for daily backups, but I also want a way of calling a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database from a client connection to back itself up. If I call pg_dump from the web server as part of the deploy script it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQLServer lets you call the BACKUP command from within a DB Connection which will put the backups on the database machine.
I found this post about MySQL (Remote backup of MySQL database), which says this is not a supported feature in MySQL. Is Postgres the same?
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB Server, then calls pg_dump? This would mean I'm storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and assign him read-only privileges to all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate using the private key. It will not be able to do anything besides creating a database backup, so it should be reasonably safe.
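Triggering the backup from the Windows deploy server could then be as simple as opening an SSH session, for example with plink from the PuTTY suite (the host name and key path below are placeholders only): logging in runs the exec pg_dump line in ~pgbackup/.bash_profile and the session closes when the dump finishes.
plink -i C:\deploy\keys\pgbackup.ppk pgbackup@db-server-hostname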
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even create a tablespace there.
Hi Mike, you are correct: using pg_dump we can save the backup only on the local system. In our case we created a script on the DB server for taking the base backup, and an expect script on another server which runs that script on the database server.
All our servers are Linux servers; we did this using shell scripts.