Server name, Instance Name, DB name in DB2 via sql query - db2

How can I find out the server name, instance name, and database name in Db2 via an SQL query?
I have found it via shell scripts; I need it via SQL scripts.

A Db2 for LUW instance may have multiple database partitions / members which may reside on different hosts.
SELECT
E.INST_NAME
, I.ID AS MEMBER
, I.HOME_HOST AS HOST
, CURRENT SERVER AS DBNAME
FROM
TABLE (DB2_GET_INSTANCE_INFO (NULL, NULL, NULL, NULL, NULL)) I
, SYSIBMADM.ENV_INST_INFO E
--WHERE I.ID = CURRENT MEMBER
;
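If you only need the instance name, host name, and current database name on a single-partition system, a simpler sketch against the SYSIBMADM administrative views may be enough (assuming ENV_SYS_INFO is available on your release):
SELECT
E.INST_NAME
, S.HOST_NAME AS HOST
, CURRENT SERVER AS DBNAME
FROM
SYSIBMADM.ENV_INST_INFO E
, SYSIBMADM.ENV_SYS_INFO S
;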

Related

GCLOUD Postgres, using foreign data wrapper extension results in permission denied for relation

I'm really stuck with the following problem.
On GCloud SQL I have a running Postgres instance.
That instance contains two databases. From one database (source_db) I want to access another database's (another_db) table (foreign_table) using the postgres_fdw extension. The recipe I'm currently employing is this:
1)
CREATE EXTENSION postgres_fdw;
CREATE SERVER foreign_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (dbname 'another_db', port '5432', host '<A_PRIVATE_IP>');
CREATE USER MAPPING for guest
SERVER foreign_db
OPTIONS (user 'guest', password 's3cr3t');
CREATE FOREIGN TABLE foreign_table
(
-- column descriptions
)
SERVER foreign_db OPTIONS (table_name 'foreign_table');
-- Alternatively I also tried with
CREATE SCHEMA external;
IMPORT FOREIGN SCHEMA public from SERVER foreign_db into external;
GRANT SELECT ON TABLE foreign_table TO guest;
The above commands run without error, but when I try to actually access the table I get this:
If using "external" schema
source_db=> select 1 from external.foreign_table limit 1;
ERROR: permission denied for relation foreign_table
CONTEXT: Remote SQL command: SELECT NULL FROM public.foreign_table (*)
If not using "external" schema
source_db=> select 1 from foreign_table limit 1;
ERROR: permission denied for relation foreign_table
CONTEXT: Remote SQL command: SELECT NULL FROM public.foreign_table
The only thing that smells a little is that the error message (at *) displays "public.foreign_table" instead of "external.foreign_table", even when I'm using the external schema... but I don't know if that actually means something :S
As far as I have researched, there is no way to log into the Postgres instance as a superuser, as that is not allowed by GCloud's SQL service, nor is there a way to edit the pg_hba.conf file in order to adjust client authentication.
I searched in a lot of places without finding what I can do to sort this out. Among the sites and pages I looked at are:
The official documentation
A personal blog's post
This other SO post having a related issue
This post and this other post regarding permissions and authorizations.
A Nice tutorial about authentication and authorization
P.S.
I was able to make this work on a Postgres instance that I ran locally.
User guest on the remote server doesn't have permissions to SELECT from the table. Since the query on the remote server is executed as user guest, you get an error.
GRANT the SELECT privilege on the table on the remote server to the user.
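A minimal sketch of that grant, run while connected to the remote database (another_db) as a role that owns the table; the USAGE grant on the schema is included in case guest lacks it:
GRANT USAGE ON SCHEMA public TO guest;
GRANT SELECT ON public.foreign_table TO guest;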

Connection to an RDS/Postgres foreign table is not working from outside AWS

Consider a foreign data server on another database on the same host:
As postgres:
CREATE SERVER keys
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '3.128.X.Y', port '12345', dbname 'keys');
>> CREATE SERVER
Here is the foreign table:
create foreign table pkey (
uuid varchar not null,
pkey varchar not null,
provisioned boolean not null default false,
created timestamp default current_timestamp not null,
modified timestamp default current_timestamp not null
) server keys options (schema_name 'public', table_name 'pkey');
>> CREATE FOREIGN TABLE
And the user mapping:
CREATE USER MAPPING FOR clip
SERVER pkey
OPTIONS (user 'pubkey', password 'pubkey');
>> CREATE USER MAPPING
Permissions are in place:
grant all on pkey to clip;
>> GRANT
Now let's try to use the foreign table, as the 'clip' user:
select count(1) from pkey;
>> ERROR: could not connect to server "pkey"
>> DETAIL: could not connect to server: Connection timed out
Is the server running on host "3.128.X.Y" and accepting
TCP/IP connections on port 12345?
Note: this connection does work when run from another ec2 instance but does not work from my laptop.
The Security Groups for the RDS instance include individually whitelisted entries for my laptop and the EC2 instance:
Security group                   Type                Rule
default (sg-058f283ad029bf244)   CIDR/IP - Inbound   73.63.Y.Z/32   # Laptop
default (sg-058f283ad029bf244)   CIDR/IP - Inbound   3.128.A.B/32   # EC2 instance
The connection from my laptop to the RDS works to the clip DB
The connection from my laptop to the RDS does not work to access the pkey foreign table defined in the clip DB
The connection from the other ec2 instance to RDS does work for all - including to access the pkey foreign table defined in the clip DB

TABLES DB2 longDescription

Good afternoon.
Can you help me? I would like to know how the longDescription table in IBM Db2 works, and how do I retrieve the data that is inside its columns?
Thank you.
I believe that longDescription is one of the Maximo Asset Management tables.
If so, here is an example of how to handle it in Db2 on AIX/Linux.
Log on as the instance owner on the machine, such as db2inst1.
Run the command below at the command prompt to list all database names under the instance:
$ db2 list db directory
Then run the command below to connect to one of those databases:
$ db2 connect to DBNAME
Please replace DBNAME with your target database name, such as sample,
i.e. db2 connect to sample
Then run the command below to list all table names in the database DBNAME:
$ db2 list tables
It will list all table names along with their schema name, type and creation time.
Then run the command below to list all columns in the table:
$ db2 describe table SCHEMA.TABLENAME
Replace SCHEMA with the schema name, such as db2inst1, and TABLENAME with the table name, such as longDescription,
i.e. db2 describe table db2inst1.longDescription
It will list all column names and some other information.
Then run the command below to get data from a column, such as ldownertable:
$ db2 "select COLUMN_NAME from SCHEMA.TABLENAME"
Replace COLUMN_NAME with the column name, such as ldownertable,
i.e. db2 "select ldownertable from db2inst1.longDescription"
If you want to see all data in the table, run the command below (quote the statement so the shell does not expand the *):
$ db2 "select * from SCHEMA.TABLENAME"
i.e. db2 "select * from db2inst1.longDescription"
Hope this helps.
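If what you are after is the long-description text itself, Maximo typically stores it in a CLOB column of this table, keyed by the owning table and row. The column names ldkey and ldtext below are assumptions (only ldownertable appears above), and the 'ASSET' filter value is just an illustration, so verify everything first with db2 describe table; a rough sketch:
-- Sketch: fetch long-description text for rows owned by the ASSET table
-- (column names assumed; confirm with: db2 describe table db2inst1.longDescription)
SELECT ldownertable, ldkey, ldtext
FROM db2inst1.longDescription
WHERE ldownertable = 'ASSET'
FETCH FIRST 10 ROWS ONLY;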

How to add a remote Postgresql db(linked server) to a Postgresql db?

I have a PostgreSQL db at home and one in the cloud. I'd like to add my home db to the cloud db so I can easily query between databases. How can this be done, without using dblink (http://www.postgresonline.com/journal/archives/44-Using-DbLink-to-access-other-PostgreSQL-Databases-and-Servers.html)?
My home db will use a dynamic IP provider (can I add a dynamic DNS name such as myhomedb.dedyn.io to the PostgreSQL settings?)
I'm stating all this in case there are any issues. My home db will only be used to update massive amounts of data but isn't mission critical (as we know, cloud computing isn't cheap).
Thanks in advance.
Looks like postgres_fdw is the way to go: https://www.postgresql.org/docs/current/postgres-fdw.html
First install the extension:
CREATE EXTENSION postgres_fdw;
Then create a foreign server using CREATE SERVER. In this example we
wish to connect to a PostgreSQL server on host 192.83.123.89
listening on port 5432. The database to which the connection is made
is named foreign_db on the remote server:
CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '192.83.123.89', port '5432', dbname 'foreign_db');
A user mapping, defined with CREATE USER MAPPING, is needed as well
to identify the role that will be used on the remote server:
CREATE USER MAPPING FOR local_user
SERVER foreign_server
OPTIONS (user 'foreign_user', password 'password');
Now it is possible to create a foreign table with CREATE FOREIGN
TABLE. In this example we wish to access the table named
some_schema.some_table on the remote server. The local name for it
will be foreign_table:
CREATE FOREIGN TABLE foreign_table (
id integer NOT NULL,
data text
)
SERVER foreign_server
OPTIONS (schema_name 'some_schema', table_name 'some_table');
It's essential that the data types and other properties of the columns
declared in CREATE FOREIGN TABLE match the actual remote table.
Column names must match as well, unless you attach column_name options
to the individual columns to show how they are named in the remote
table. In many cases, use of IMPORT FOREIGN SCHEMA is preferable to
constructing foreign table definitions manually.
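Once those objects exist, a plain SELECT against the local name goes through the wrapper; and since the host option takes any libpq host string, a dynamic DNS name such as myhomedb.dedyn.io should be usable there as well. A small sketch reusing the names from the quoted example:
-- Query the remote some_schema.some_table through its local name
SELECT id, data FROM foreign_table LIMIT 10;
-- Or skip hand-written column definitions and import a whole schema instead
IMPORT FOREIGN SCHEMA some_schema FROM SERVER foreign_server INTO public;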

Can't access data from pg_index using Foreign Data Wrapper in postgreSQL 9.3

I set up a foreign data wrapper in PostgreSQL 9.3 pointing at another PostgreSQL database as below:
CREATE SERVER app_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (dbname 'postgres', host 'localhost');
CREATE USER MAPPING for postgres
SERVER app_db
OPTIONS (user 'postgres', password 'XXXX');
CREATE FOREIGN TABLE location_local
(
id integer,
name character varying
)
SERVER app_db OPTIONS (table_name 'location');
SELECT * FROM location_local;
This all works fine, as the location table is in the public schema. But I also want to access data from pg_catalog. When I follow the same procedure to access that data, it gives me this error:
ERROR: relation "public.pg_catalog.pg_index" does not exist
Is there any way to access data from the catalog using FDW, or in any other way?
You could try the schema_name option in the foreign table definition:
CREATE FOREIGN TABLE foreign_index (
-- ...
)
SERVER app_db
OPTIONS (
schema_name 'pg_catalog',
table_name 'pg_index'
);
But that might not work, because pg_catalog is not really an ordinary schema, but the system catalog. If that is the case, you can still use the dblink module to run queries against the foreign database.
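A minimal dblink sketch for that fallback, assuming the dblink extension can be installed and reusing the connection details from the server definition above (only a few pg_index columns are fetched; the column list in the alias has to match what the remote query returns):
CREATE EXTENSION IF NOT EXISTS dblink;
SELECT *
FROM dblink('host=localhost dbname=postgres user=postgres password=XXXX',
            'SELECT indexrelid, indrelid, indisunique FROM pg_catalog.pg_index')
     AS t(indexrelid oid, indrelid oid, indisunique boolean);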