IBM DB2, create an alias to database with schemas and tables - db2

I have a database (list db directory):
Database 4 entry:
Database alias = ABC
Database name = ABC
Local database directory = /data
Database release level = f.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
I need to have an alias for that database because my application tries to connect to database DEF.
I can create an alias using
catalog db ABC as DEF
then the (list db directory) shows:
Database 4 entry:
Database alias = ABC
Database name = ABC
Local database directory = /data
Database release level = f.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
Database 5 entry:
Database alias = DEF
Database name = ABC
Local database directory = /data
Database release level = f.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
But after I connect to the aliased db using:
db2 connect to DEF
I can't access any schemas and tables from the original database. Of course when I connect using the ABC database name everything is visible and in place.
Am I misunderstanding aliases in DB2? Or maybe there is some option like "create an alias with data" or something like that?

You seem to misunderstand the purpose of a database-alias produced by db2 catalog database ABC as DEF.
For Db2 for Linux/Unix/Windows, a database ALIAS is not a SCHEMA.
You cannot use the new alias in SELECT or other SQL statements.
You can only reference the database-alias in the CONNECT step. After successful connection, use SQL as if you had connected to database ABC only.
The database-alias is only a pointer to a database.
The database being pointed to (in your case ABC) does not change, its schemas do not change, and the way you refer to objects such as tables and views in those schemas does not change.
In your SELECT (or other SQL statements) you must refer to the schemas that actually exist inside the physical database. There really is no schema called DEF, because DEF is an alias known only to the command line processor and the Db2 database directory. If you wish to create synonyms inside the database you are free to do that, but that is NOT the purpose of database aliases.
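To make the distinction concrete, here is a minimal, hypothetical Python model of the client-side database directory (the names and structure are illustrative, not a real Db2 API): both directory entries are only pointers to the same database, so connecting through either name reaches exactly the same schemas and tables.

```python
# Hypothetical model of the Db2 client database directory:
# an alias is only a client-side pointer to a database.
directory = {
    "ABC": {"database": "ABC", "path": "/data"},
    "DEF": {"database": "ABC", "path": "/data"},  # catalog db ABC as DEF
}

def connect(alias):
    """Resolve an alias to the database it points to."""
    entry = directory[alias]
    return entry["database"]

# Both aliases resolve to the very same database, so the
# schemas and tables you see after connecting are identical.
print(connect("ABC"))  # -> ABC
print(connect("DEF"))  # -> ABC
```

The alias exists only in this directory; nothing inside the database itself knows the name DEF.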
As you seem to be running Db2 for Linux/Unix/Windows with local databases (Directory entry type = Indirect), you should connect to each database ABC and DEF and run both of the queries below and then compare their outputs from each database, and update your question with the outputs.
select char(os_name,20) as os_name
, char(os_version,5) as os_version
, char(os_release,20) as os_release
, char(host_name,30) as host_name
from sysibmadm.env_sys_info;
select char(inst_name,15) as inst_name
,char(release_num,20) as release_num
,char(service_level,20) as service_level
,char(bld_level,20) as bld_level
,char(ptf,20) as ptf
from sysibmadm.env_inst_info;

@mao - You are right!
I had been connecting with external software to one node, and with the shell db2 to another node. That was the reason I wasn't able to see the databases.

Related

using pg_cron extension on Cloud SQL

I am trying to use pg_cron to schedule calls on stored procedure on several DBs in a Postgres Cloud SQL instance.
Unfortunately it looks like the pg_cron extension can only be created in the postgres database.
When I try to create pg_cron in a database other than postgres I get this message:
CREATE EXTENSION pg_cron;
ERROR: can only create extension in database postgres
Detail: Jobs must be scheduled from the database configured in
cron.database_name, since the pg_cron background worker reads job
descriptions from this database. Hint: Add cron.database_name =
'mydb' in postgresql.conf to use the current database.
Where: PL/pgSQL function inline_code_block line 4 at RAISE
Query = CREATE EXTENSION pg_cron;
... I don't think I have access to postgresql.conf in Cloud SQL ... is there another way?
Maybe I could use postgres_fdw to achieve my goal ?
Thank you,
There's no need to edit any files. All you have to do is set the cloudsql.enable_pg_cron flag (see guide) and then create the extension in the postgres database.
You need to log onto the postgres database rather than the one you're using for your app. For me that's just replacing the name of my app database with 'postgres' e.g.
psql -U<username> -h<host ip> -p<port> postgres
Then simply run the create extension command and the cron.job table appears. Here's one I did a few minutes ago in our cloudsql database. I'm using the cloudsql proxy to access the remote db:
127.0.0.1:2345 admin#postgres=> create extension pg_cron;
CREATE EXTENSION
Time: 268.376 ms
127.0.0.1:2345 admin#postgres=> select * from cron.job;
jobid | schedule | command | nodename | nodeport | database | username | active | jobname
-------+----------+---------+----------+----------+----------+----------+--------+---------
(0 rows)
Time: 157.447 ms
Be careful to specify the correct target database when setting the schedule; otherwise pg_cron will assume you want the job to run in the postgres database.
.. I don't think I have access to postgresql.conf in Cloud SQL ...
Actually there is a way: you can use the gcloud patch command.
According to the pg_cron docs, you need to change two things in the conf file:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'another_table' # optionally, to change the database where the pg_cron background worker expects its metadata tables to be created
Now, according to gcloud, you need to set two flags on your instance:
gcloud sql instances patch [my_instance] --database-flags=cloudsql.enable_pg_cron=on,cron.database_name=[my_name]
CAREFUL: don't run the patch command twice, as the second run would erase your first setting. Put all your changes in one command.
You also might want to set cron.database_name in postgresql.conf (or as a flag in Cloud SQL):
cron.database_name = mydatabase
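The warning above — that running patch twice erases the first setting — can be sketched with a small, hypothetical Python model (not the real gcloud implementation): patching --database-flags replaces the whole flag set rather than merging into it, which is why both flags must go in one command.

```python
# Hypothetical model: a --database-flags patch REPLACES the
# instance's flag set; it does not merge with existing flags.
def patch_flags(instance, new_flags):
    instance["flags"] = dict(new_flags)  # wholesale replacement
    return instance

inst = {"flags": {}}

# Two separate patches: the second one erases the first flag.
patch_flags(inst, {"cloudsql.enable_pg_cron": "on"})
patch_flags(inst, {"cron.database_name": "mydb"})
print("cloudsql.enable_pg_cron" in inst["flags"])  # -> False

# One combined patch keeps both settings.
patch_flags(inst, {"cloudsql.enable_pg_cron": "on",
                   "cron.database_name": "mydb"})
print(sorted(inst["flags"]))
# -> ['cloudsql.enable_pg_cron', 'cron.database_name']
```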

Unable to connect to remote DB in db2

Facing a weird issue in DB2: unable to connect to a remote DB.
It catalogued successfully, but when I try to connect to the DB alias I get an error:
"SQL30061N The database alias or database name "NDTEST " was not
found at the remote node."
OS :- Linux
DB2Level :-
DB21085I This instance or install (instance name, where applicable:
"db2inst1") uses "64" bits and DB2 code release "SQL10055" with level
identifier "0606010E".
Informational tokens are "DB2 v10.5.0.5", "s141128", "IP23633", and Fix Pack
"5".
Product is installed at "/path/to/db2".
But we did not mention "NDTEST " anywhere.
Database alias = QAZWSXED
Database name = NEWDB(changedName)
Node name = BASENNEW
Database release level = 10.00
Comment =
Directory entry type = Remote
Authentication = SERVER_ENCRYPT
Catalog database partition number = -1
Alternate server hostname =
Alternate server port number =
Node name = BASENNEW
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = hostname
Service name = portNumber
db2 connect to QAZWSXED
SQL30061N The database alias or database name "NDTEST " was not
found at the remote node. SQLSTATE=08004
The error means exactly what it says - there is no NEWDB database on the BASENNEW node.
The fact that you were able to catalog the database doesn't mean it is actually there: there is no connection attempt during the CATALOG DATABASE command (note that you are not prompted for a password).
E.g. if I create a local TCP/IP loopback node for my instance:
$ db2 catalog tcpip node loop remote localhost server 61115
I can catalog both an existing database (SAMPLE) and a non-existing one (BADDB) with no issues:
$ db2 catalog database sample as loopsamp at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 catalog database baddb as loopbad at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
I will be able to connect to the first one:
Enter current password for kkuduk:
Database Connection Info
Database server = DB2/LINUXX8664 11.5.0.0
SQL authorization ID = KKUDUK
Local database alias = LOOPSAMP
but connection attempt to non-existing one will fail with SQL30061N
db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
SQL30061N The database alias or database name "BADDB " was not
found at the remote node. SQLSTATE=08004
Please verify the database directory on the remote server by running
$ db2 list db directory
and see if you have an entry for your database which has type Indirect
Directory entry type = Indirect
Edit:
I didn't notice your edit that changed the database name. If the error returns a stale database name then indeed db2 terminate is needed to create a new CLP client application (db2bp).
E.g. if I uncatalog the incorrect entry and catalog it again, I get a similar error because the client uses the cached entry pointing to the incorrect database name:
$ db2 uncatalog db LOOPBAD
DB20000I The UNCATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 catalog database sample as loopbad at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
SQL30061N The database alias or database name "BADDB " was not
found at the remote node. SQLSTATE=08004
db2 terminate terminates the Db2 CLP client back end, and the new entry is then read correctly from the catalog:
$ db2 terminate
DB20000I The TERMINATE command completed successfully.
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
Database Connection Information
Database server = DB2/LINUXX8664 11.5.0.0
SQL authorization ID = KKUDUK
Local database alias = LOOPBAD
Found the problem - there was one entry in DCS (Database Connection Services). To check the DCS details:
db2 list dcs directory
The above command showed a DCS entry with a Target Database Name of NDTest.
Everything works fine after removing/uncataloguing the DCS entry.
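The lookup chain behind this fix can be sketched as a hypothetical Python model (illustrative only, not the real client logic): the client resolves the alias through the database directory, but a DCS entry, if present, overrides the database name actually sent to the remote node.

```python
# Hypothetical model of how a Db2 client picks the target
# database name sent to the remote node.
db_directory = {"QAZWSXED": {"database": "NEWDB", "node": "BASENNEW"}}
dcs_directory = {"NEWDB": {"target": "NDTEST"}}  # stale DCS entry

def target_database(alias):
    """Name sent to the remote node for a given alias."""
    db = db_directory[alias]["database"]
    dcs = dcs_directory.get(db)        # DCS entry overrides, if present
    return dcs["target"] if dcs else db

print(target_database("QAZWSXED"))     # -> NDTEST (hence SQL30061N)

# After the DCS entry is uncatalogued the override disappears:
dcs_directory.pop("NEWDB")
print(target_database("QAZWSXED"))     # -> NEWDB
```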

Two instances of an application connected to two schemas of the same postgres database are unable to differentiate

I am working on a SaaS solution currently provisioning sonarqube and gerrit applications on kubernetes.
As part of that I want to create a new schema in my postgres database for every new application that I provision (i.e., instance1, instance2, instance3, and so on). The application connects using the following connection string:
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1&currentSchema=instance1
The solution works fine the first time gerrit and sonarqube are provisioned, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; the failures are most likely caused by the application trying to create tables that already exist.
I am creating the schema with following sql.
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;
create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
I am having difficulty understanding this behavior: how can two separate applications configured against two different schemas of the same database see each other's tables?
In order to reproduce this problem I quickly wrote a python script to connect to two different schemas of the same database and create the same table, and it works fine.
import psycopg2
import sys
import random

_user = raw_input("user: ")
con = None
try:
    con = psycopg2.connect(database='gerrit', user=_user,
                           password=_user, host='localhost')
    cur = con.cursor()
    cur.execute('SELECT version()')
    ver = cur.fetchone()
    print ver

    table_name = 'tbl_%d' % (1)  # random.randint(1,100)
    cur.execute('CREATE TABLE %s (id serial, name varchar(32));' % (table_name))
    cur.execute('INSERT INTO %s values (1, \'%s\');' % (table_name, table_name + _user))
    con.commit()
    cur.execute('SELECT * from %s' % (table_name))
    ver = cur.fetchone()
    print ver
except psycopg2.DatabaseError, e:
    print 'Error %s' % e
    sys.exit(1)
finally:
    if con:
        con.close()
Output as follows
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
Since I am able to verify this workflow from python, is this a limitation of JDBC or of the applications (gerrit & sonarqube)? Has anyone come across this problem with postgres?
The default search_path is "$user", public, where $user is substituted with the value of SESSION_USER, so there is no need to explicitly specify the search_path for the ROLE.
The caveat is that the user has to have USAGE permission on any schema within the search path. If the "$user" schema does not exist it is ignored (https://www.postgresql.org/docs/9.4/static/runtime-config-client.html).
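As an illustration of that rule, here is a minimal, hypothetical Python model of how an unqualified table name is resolved against the search path ("$user" substituted with the session user, nonexistent schemas skipped); it is a sketch of the documented behavior, not Postgres code:

```python
# Hypothetical model of Postgres search_path resolution:
# "$user" is replaced by the session user, and schemas that
# do not exist (or are not usable) are silently skipped.
def resolve_schema(search_path, session_user, existing_schemas):
    for entry in search_path:
        schema = session_user if entry == "$user" else entry
        if schema in existing_schemas:
            return schema
    return None

schemas = {"public", "instance1"}

# User with a matching schema: unqualified names land there.
print(resolve_schema(["$user", "public"], "instance1", schemas))  # -> instance1

# User without a matching schema: "$user" is skipped, public wins.
print(resolve_schema(["$user", "public"], "instance2", schemas))  # -> public
```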
So, after tinkering with the gerrit source code, I figured out that it's not honoring the currentSchema option. That more or less explains why it has been behaving this way.

DB2 ODBC Connection String without Database Name

I am trying to connect to a DB2 server with ODBC, which works fine if I specify a database in the connection string.
driver = 'IBM DB2 ODBC DRIVER'
server = '10.30.30.114'
port = '50000'
protocol = 'TCPIP'
database = 'SAMPLE'
user = 'administrator'
pass = 'password'
DBI.connect("DBI:ODBC:Driver=#{driver};HostName=#{server};Port=#{port};Protocol=#{protocol};Database=#{database};Uid=#{user};Pwd=#{pass};")
The issue is that I will not know the database name in advance at the time of connecting to the server. I want the list of databases on the server, and then the tables in those databases. How should I approach this?
You cannot "connect to a DB2 server" via ODBC; you can only connect to a database, for which you obviously need to specify the database name. You could use the DB2 C/C++ API calls db2DbDirOpenScan and db2DbDirGetNextEntry to list the database directory, but this code will need to be executed on the server itself, otherwise it will attempt to list the database catalog on the client machine.
IF you are connecting to DB2 for i server (formerly DB2 UDB on OS/400) --
Initially connect using hostname, allowing database to default. You can then get a list of databases in the DB2 for i SYSCATALOGS view. Your query might look like this:
SELECT catalog_name, -- database name
catalog_text -- DB description
FROM QSYS2.SYSCATALOGS
WHERE catalog_type='LOCAL' -- local to that host
AND catalog_status='AVAILABLE' -- REMOTE catalogs are 'UNKNOWN' status
You could then connect to that database if desired. Once connected to the appropriate database, you could query other DB2 for i catalog views such as SYSSCHEMAS and SYSTABLES. ODBC/JDBC catalog views and ANS/ISO catalog views would also be available.
Other API's are available outside of an ODBC connection via IBM i Access, if you prefer.

database not listed in DB2

I have some databases in DB2 on an AIX server.
I log in as the DB2 instance user id "chandroo" (with db2profile supposedly set up automatically when I log in), issue the command below, and get no result.
chandroo#xxxxxxxx::/db2/chandroo> db2 list db directory
chandroo#xxxxxxxx::/db2/chandroo>
However, if I invoke db2 directly from the installation directory I am able to see the entries, and I have no clue why this happens.
chandroo#xxxxxxxxx::/opt/IBM/db2/V9.5/bin> ./db2 list db directory
System Database Directory
Number of entries in the directory = 2
Database 1 entry:
Database alias = CHANDB
Database name = CHANDB
Local database directory = /db2/chandroo/db
Database release level = c.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
Database 2 entry:
Database alias = CHAN
Database name = CHAN
Local database directory = /db2/chandroo/db
Database release level = c.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
chandroo#xxxxxxxxx::/opt/IBM/db2/V9.5/bin>
It sounds like the db2profile script isn't being sourced properly. The environment variables defined in that script need to be set for your current AIX shell process, not a temporary subprocess started by sh, ksh, or bash. This is accomplished by specifying a single dot instead of a program name to run the db2profile script. The difference is subtle, but important.
If that is the problem, running this command will fix it by properly initializing your current shell process:
. ~chandroo/sqllib/db2profile
and commands like db2 list db directory will start working.
The next step is to determine what is keeping that from happening in your $HOME/.profile startup script. If you see the call to db2profile using the proper syntax as shown above, there might be a problem with the execution permissions on $HOME/.profile.
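The sourcing-vs-subprocess distinction can be demonstrated with a short, illustrative Python sketch (not DB2-specific): environment variables exported inside a child shell vanish when that shell exits, which is exactly why db2profile must be sourced with the leading dot rather than executed as a program.

```python
import os
import subprocess

# Run the export in a child shell: the variable lives only in the
# child's environment and is gone once the child exits.
subprocess.run(["sh", "-c", "export DB2DEMO=1"], check=True)
survived_subprocess = "DB2DEMO" in os.environ
print(survived_subprocess)  # -> False

# Sourcing with ". script" runs the same commands in the *current*
# shell; the closest Python analogy is mutating os.environ directly.
os.environ["DB2DEMO"] = "1"
print("DB2DEMO" in os.environ)  # -> True
```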