Query to list all the databases in DB2 - db2

The general CLP command for listing the active databases in DB2 is
LIST ACTIVE DATABASES
What is the command to list all the databases in the system database directory?

It is
list db directory
Details are documented here
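For example, both commands can be issued from an operating-system shell through the db2 command-line processor; a minimal sketch:
db2 list active databases
db2 list db directory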

Using Python:
In [42]: stmt = ibm_db.exec_immediate(conn, "SELECT DISTINCT(DB_NAME) FROM table(mon_get_memory_pool('','',-2))")
In [43]: while ibm_db.fetch_row(stmt):
    ...:     DB_NAME = ibm_db.result(stmt, "DB_NAME")
    ...:     print("DB_NAME = {}".format(DB_NAME))
    ...:
DB_NAME = SAMPLE
DB_NAME = None
The None row corresponds to memory pools that are not tied to a specific database.


How to convert UTF8 data from PostgreSQL to AL32UTF8 Oracle DB?

I have a task to import some data from a Postgres database into Oracle via a dblink.
The connection between Postgres and Oracle works fine, but unfortunately, when I try to read data from the view I created (in the Oracle database), I noticed a problem with the encoding of special national (Polish) characters.
The source Postgres database uses the UTF8 encoding, while Oracle uses AL32UTF8.
Postgres:
select server_encoding;
 server_encoding
-----------------
 UTF8
Oracle:
select * from v$nls_parameters where parameter like '%CHARACTERSET';
PARAMETER                VALUE
-----------------------  ---------
NLS_CHARACTERSET         AL32UTF8
NLS_NCHAR_CHARACTERSET   AL16UTF16
When I use the command "isql -v" (on the destination machine with the Oracle database) and then "select * from table;", everything works fine, but when I use the same select from the Oracle database over the dblink, the encoding of my data is broken.
For example:
from odbc:
isql -v
select * from table;
[ID][Name]
0,Warszawa
1,Kraków
2,Gdańsk
from oracle using dblink:
select * from table@dblink;
[ID][Name]
0,Warszawa
1,KrakĂłw
2,Gdańsk
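One way to check whether the bytes themselves are mangled (rather than just the client display) is Oracle's DUMP function, which prints the stored byte values; a minimal sketch against the same dblink, where format 1016 shows hex byte values together with the character set (column quoting may differ depending on the gateway):
select "Name", dump("Name", 1016) from table@dblink;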
/etc/odbc.ini:
[ODBC Data Sources]
[Postgres_DB]
Description = Postgres_DB
Driver = /usr/lib64/psqlodbcw.so
DSN = Postgres_DB
Trace = Yes
TraceFile = /tmp/odbc_sql_postgresdb.log
Database = database
Servername = server
UserName = user
Password = secret
Port = 5432
Protocol = 8.4
ReadOnly = Yes
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
SSLmode = require
Charset = UTF8
$ORACLE_HOME/hs/admin/initPostgres_DB.ora:
HS_FDS_CONNECT_INFO = Postgres_DB
HS_FDS_TRACE_LEVEL=DEBUG
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
HS_FDS_SUPPORT_STATISTICS = FALSE
HS_LANGUAGE=AL32UTF8
set ODBCINI=/etc/odbc.ini
I have installed these packages:
postgresql-libs.x86_64 - 8.4.20-8.el6_9
postgresql-odbc.x86_64 - 08.04.0200-1.el6
unixODBC.x86_64 - 2.2.14-14.el6
unixODBC-devel.x86_64 - 2.2.14-14.el6
Please help me; I need to have the correct data in Oracle.
Thank you very much.

How can I get Technical Names of my Fields Using COPY Command

I wrote my code and I am getting all the data from my Postgres database, but the technical names of the fields stored in my database are not appearing in my xlsx file.
Below is my code:
import psycopg2

conn = psycopg2.connect("dbname=cush2 user=tryton50 password=admin host=localhost")
cur = conn.cursor()
sql = "COPY (SELECT * FROM cushing_syndrome) TO STDOUT WITH CSV DELIMITER ','"
with open("/home/cf/Desktop/result_1.xlsx", "w") as file:
    cur.copy_expert(sql, file)
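No answer was recorded for this question here, but COPY's HEADER option is the usual way to emit the column (field) names as the first row; a minimal sketch, assuming the same table and credentials (note that COPY produces plain CSV, so a .csv extension is used instead of .xlsx):
import psycopg2

conn = psycopg2.connect("dbname=cush2 user=tryton50 password=admin host=localhost")
cur = conn.cursor()
# HEADER writes the column names as the first CSV row.
sql = "COPY (SELECT * FROM cushing_syndrome) TO STDOUT WITH CSV HEADER DELIMITER ','"
with open("/home/cf/Desktop/result_1.csv", "w") as file:
    cur.copy_expert(sql, file)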

Postgresql pg_profile getting error while creating snapshot

I am referring to https://github.com/zubkov-andrei/pg_profile for generating an AWR-like report.
The steps I have followed are below:
1) Enabled the parameters below inside postgresql.conf (located in D:\Program Files\PostgreSQL\9.6\data):
track_activities = on
track_counts = on
track_io_timing = on
track_functions = on
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = 'top'
pg_stat_statements.save = off
pg_profile.topn = 20
pg_profile.retention = 7
2) Manually copied all the files beginning with pg_profile to D:\Program Files\PostgreSQL\9.6\share\extension
3) From the pgAdmin4 console, executed the commands below successfully:
CREATE EXTENSION dblink;
CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION pg_profile;
4) To see which node is already present, I executed SELECT * from node_show();, which returned:
node_name | connstr                   | enabled
----------+---------------------------+--------
local     | dbname=postgres port=5432 | true
5) To create a snapshot I executed SELECT * from snapshot('local');
but I am getting the error below:
ERROR: could not establish connection
DETAIL: fe_sendauth: no password supplied
CONTEXT: SQL statement "SELECT dblink_connect('node_connection',node_connstr)"
PL/pgSQL function snapshot(integer) line 38 at PERFORM
PL/pgSQL function snapshot(name) line 9 at RETURN
SQL state: 08001
Once I am able to generate multiple snapshots, I guess I should be able to generate the report.
Just use SELECT * from snapshot().
Look at the code of the function: it calls the other one with the node as a parameter.
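For context, the fe_sendauth error means the dblink connection opened inside snapshot() found no password for the node's connstr. A minimal sketch that reproduces and works around it directly (the user name and password below are placeholders):
-- Reproduces the failure: the connstr carries no password, so libpq cannot authenticate.
SELECT dblink_connect('probe', 'dbname=postgres port=5432');
-- Supplying credentials in the connstr (or in a pgpass file) makes the same call succeed.
SELECT dblink_connect('probe2', 'dbname=postgres port=5432 user=postgres password=<your password>');
SELECT dblink_disconnect('probe2');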

How to get data out of a postgres bytea column into a python variable using sqlalchemy?

I am working with the script below.
If I change the script to avoid the bytea datatype, I can easily copy data from my Postgres table into a Python variable.
But if the data is in a bytea Postgres column, I encounter a strange object called memory which confuses me.
Here is the script, which I run against Anaconda Python 3.5.2:
# bytea.py
import sqlalchemy

# create a connection
db_s = 'postgres://dan:dan@127.0.0.1/dan'
conn = sqlalchemy.create_engine(db_s).connect()

sql_s = "drop table if exists dropme"
conn.execute(sql_s)
sql_s = "create table dropme(c1 bytea)"
conn.execute(sql_s)
sql_s = "insert into dropme(c1) values( cast('hello' AS bytea) );"
conn.execute(sql_s)
sql_s = "select c1 from dropme limit 1"
result = conn.execute(sql_s)
print(result)
# <sqlalchemy.engine.result.ResultProxy object at 0x7fcbccdade80>
for row in result:
    print(row['c1'])
# <memory at 0x7f4c125a6c48>
How do I get the data which is inside of <memory at 0x7f4c125a6c48>?
You can cast it using Python's bytes():
for row in result:
    print(bytes(row['c1']))
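As a follow-up, the object is a memoryview (psycopg2 returns bytea that way under Python 3), so .tobytes() works as well; a minimal sketch, assuming the select has been re-run so the result is fresh and the column actually holds UTF-8 text:
for row in result:
    mv = row['c1']                        # memoryview over the bytea payload
    print(mv.tobytes())                   # b'hello'
    print(mv.tobytes().decode('utf-8'))   # hello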

How to connect Jupyter Ipython notebook to Amazon redshift

I am using Mac Yosemite.
I have installed the packages postgresql, psycopg2, and simplejson using conda install "package name".
After the installation I imported these packages. I tried to create a JSON file with my Amazon Redshift credentials:
{
"user_name": "YOUR USER NAME",
"password": "YOUR PASSWORD",
"host_name": "YOUR HOST NAME",
"port_num": "5439",
"db_name": "YOUR DATABASE NAME"
}
I used:
with open("Credentials.json") as fh:
    creds = simplejson.loads(fh.read())
But this is throwing an error. These were the instructions given on a website. I tried searching other websites, but no site gives a good explanation.
Please let me know how I can connect Jupyter to Amazon Redshift.
There's a nice guide from RJMetrics here: "Setting up Your Analytics Stack with Jupyter Notebook & AWS Redshift". It uses ipython-sql.
This works great and displays results in a grid.
In [1]:
import sqlalchemy
import psycopg2
import simplejson
%load_ext sql
%config SqlMagic.displaylimit = 10

In [2]:
with open("./my_db.creds") as fh:
    creds = simplejson.loads(fh.read())

connect_to_db = 'postgresql+psycopg2://' + \
    creds['user_name'] + ':' + creds['password'] + '@' + \
    creds['host_name'] + ':' + creds['port_num'] + '/' + creds['db_name']
%sql $connect_to_db

In [3]:
%sql SELECT * FROM my_table LIMIT 25;
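As a follow-up, an ipython-sql result set can also be converted into a pandas DataFrame; a minimal sketch, assuming the connection above is active and pandas is installed:
In [4]:
result = %sql SELECT * FROM my_table LIMIT 25;
df = result.DataFrame()   # ipython-sql ResultSet exposes a DataFrame() helper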
Here's how I do it:
----INSERT IN CELL 1-----
import psycopg2
redshift_endpoint = "<add your endpoint>"
redshift_user = "<add your user>"
redshift_pass = "<add your password>"
port = <your port>
dbname = "<your db name>"
----INSERT IN CELL 2-----
from sqlalchemy import create_engine
from sqlalchemy import text
engine_string = "postgresql+psycopg2://%s:%s@%s:%d/%s" \
    % (redshift_user, redshift_pass, redshift_endpoint, port, dbname)
engine = create_engine(engine_string)
----INSERT IN CELL 3 - THIS EXAMPLE WILL GET ALL TABLES FROM YOUR DATABASE-----
sql = """
select schemaname, tablename from pg_tables order by schemaname, tablename;
"""
----LOAD RESULTS AS TUPLES TO A LIST-----
tables = []
output = engine.execute(sql)
for row in output:
    tables.append(row)
tables
--IF YOU'RE USING PANDAS---
import pandas as pd
raw_data = pd.read_sql_query(text(sql), engine)
The easiest way is to use this extension:
https://github.com/sat28/jupyter-redshift
The sample notebook shows how it loads the redshift utility as an IPython magic.
Edit 1
Support for writing back to the redshift database has also been added.