Connecting to PostgreSQL in wsgi file - postgresql

I ran into a 500 server error when trying to connect to PostgreSQL from a wsgi file using psycopg2:
import psycopg2

def application(environ, start_response):
    try:
        conn = psycopg2.connect("database = testdb, user = postgres, password = secret")
    execept:
        print "I am unable to connect to the database"
    status = '200 OK'
    output = 'Hello Udacity, Robert!'
    response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
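For what it's worth, the string passed to psycopg2.connect() above is not a valid libpq DSN: libpq expects space-separated key=value pairs with no commas, and the keyword is dbname rather than database. A minimal corrected sketch with the same testdb credentials:
import psycopg2

# space-separated key=value pairs, no commas; the libpq keyword is dbname, not database
conn = psycopg2.connect("dbname=testdb user=postgres password=secret")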

After creating a database ('udacity' in my case), a table ('hello' in my case) and populating it ('Hello world' in my case):
# psql environment
CREATE DATABASE udacity;
CREATE TABLE hello(
    word text);
INSERT INTO hello VALUES ('Hello world');
You have to install the Python dependencies:
# shell environment
sudo apt install python-psycopg2 libpq-dev
sudo apt install python-pip
pip install psycopg2
Then, the following myapp.wsgi script works for me:
import psycopg2

def application(environ, start_response):
    output = 'Original message'
    # DB connection
    try:
        connection = psycopg2.connect("dbname='udacity' user='ubuntu' host='localhost' password='udacity'")
        cursor = connection.cursor()
        cursor.execute("""SELECT * from hello""")
        rows = cursor.fetchall()
        for row in rows:
            output = row[0]
    except:
        output = 'E: Python script'
    # build the headers after the output is final so Content-Length is correct
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
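If you prefer keyword arguments over a DSN string, here is a minimal sketch of the same query (a hypothetical fetch_hello() helper, assuming the same 'udacity' database and 'ubuntu' role as above) that also closes the connection explicitly:
import psycopg2

def fetch_hello():
    # keyword arguments are an alternative to the space-separated DSN string
    connection = psycopg2.connect(dbname='udacity', user='ubuntu',
                                  host='localhost', password='udacity')
    try:
        cursor = connection.cursor()
        cursor.execute("SELECT word FROM hello")
        rows = cursor.fetchall()
        return rows[0][0] if rows else 'Original message'
    finally:
        connection.close()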

Related

How to convert UTF8 data from PostgreSQL to AL32UTF8 Oracle DB?

I have a task to import some data from a Postgres database to Oracle via a dblink.
The connection between Postgres and Oracle works fine, but unfortunately, when I try to read data from the created view (in the Oracle database), I see a problem with data encoding: special national characters (Polish) are broken.
The source Postgres database has UTF8 encoding, while Oracle uses AL32UTF8.
Postgres:
select server_encoding;

 server_encoding
-----------------
 UTF8

Oracle:
select * from v$nls_parameters where parameter like '%CHARACTERSET';

PARAMETER                 VALUE
NLS_CHARACTERSET          AL32UTF8
NLS_NCHAR_CHARACTERSET    AL16UTF16
When I run "isql -v" (on the destination machine with the Oracle database) and then "select * from table;", everything works fine, but when I run the same select from the Oracle database over the dblink, the encoding of my data is broken.
For example:
from odbc:
isql -v
select * from table;
[ID][Name]
0,Warszawa
1,Kraków
2,Gdańsk
from oracle using dblink:
select * from table#dblink;
[ID][Name]
0,Warszawa
1,KrakĂłw
2,Gdańsk
/etc/odbc.ini:
[ODBC Data Sources]
[Postgres_DB]
Description = Postgres_DB
Driver = /usr/lib64/psqlodbcw.so
DSN = Postgres_DB
Trace = Yes
TraceFile = /tmp/odbc_sql_postgresdb.log
Database = database
Servername = server
UserName = user
Password = secret
Port = 5432
Protocol = 8.4
ReadOnly = Yes
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
SSLmode = require
Charset = UTF8
$ORACLE_HOME/hs/admin/initPostgres_DB.ora:
HS_FDS_CONNECT_INFO = Postgres_DB
HS_FDS_TRACE_LEVEL=DEBUG
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
HS_FDS_SUPPORT_STATISTICS = FALSE
HS_LANGUAGE=AL32UTF8
set ODBCINI=/etc/odbc.ini
I have installed these packages:
postgresql-libs.x86_64 - 8.4.20-8.el6_9
postgresql-odbc.x86_64 - 08.04.0200-1.el6
unixODBC.x86_64 - 2.2.14-14.el6
unixODBC-devel.x86_64 - 2.2.14-14.el6
Please help me; I need to have the correct data in Oracle.
Thank you very much.
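The pattern of the broken output gives a hint: 'KrakĂłw' is exactly what the UTF-8 bytes for 'Kraków' look like when they are re-decoded as a single-byte Polish code page, which suggests the data is being converted twice somewhere in the ODBC/HS chain rather than stored incorrectly. A quick Python check of that reading (a sketch, not a fix):
# reproduce the mojibake seen over the dblink: UTF-8 bytes for 'Kraków'
# re-decoded as a single-byte Polish code page (Windows-1250)
print('Kraków'.encode('utf-8').decode('cp1250'))   # prints KrakĂłw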

I am unable to COPY my CSV to postgres using psycopg2/copy_expert

Edit:
In postgresql.conf, log_statement is set to:
#log_statement = 'none' # none, ddl, mod, all
My objective is to COPY a .csv file containing ~300k records to Postgres.
I am running the script below and nothing happens; there is no error or warning, but the CSV is still not uploaded.
Any thoughts?
import psycopg2

# Try to connect
try:
    conn = psycopg2.connect(database="<db>", user="<user>", password="<pwd>", host="<host>", port="<port>")
    print("Database Connected....")
except:
    print("Unable to Connect....")

cur = conn.cursor()
try:
    sqlstr = "COPY \"HISTORICALS\".\"HISTORICAL_DAILY_MASTER\" FROM STDIN DELIMITER ',' CSV"
    with open('/Users/kevin/Dropbox/Stonks/HISTORICALS/dump.csv') as f:
        cur.copy_expert(sqlstr, f)
    conn.commit()
    print("COPY pass")
except:
    print("Unable to COPY...")

# Close communication with the database
cur.close()
conn.close()
This is what my .csv looks like
Thanks!
Kevin
I suggest you first load your CSV into a pandas DataFrame:
import io
import pandas as pd
import psycopg2

conn = psycopg2.connect(database="<db>", user="<user>", password="<pwd>", host="<host>", port="<port>")
cur = conn.cursor()
df = pd.read_csv('data.csv')
# copy_from needs a file-like object and a plain table name, not a DataFrame
buffer = io.StringIO()
df.to_csv(buffer, index=False, header=False)
buffer.seek(0)
cur.copy_from(buffer, '<table>', sep=',', null='', columns=list(df.columns))
For the columns argument I don't remember whether copy_from wants a tuple or a list, but it should work with a simple conversion, and you should read this:
Pandas dataframe to PostgreSQL table using psycopg2 without SQLAlchemy?, which could help you.
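Since the target here is the schema-qualified, quoted table "HISTORICALS"."HISTORICAL_DAILY_MASTER" (which copy_from cannot address), a hedged alternative is to keep the original copy_expert call but let the exception surface instead of swallowing it with a bare except, so the reason nothing is loaded becomes visible:
import psycopg2

conn = psycopg2.connect(database="<db>", user="<user>", password="<pwd>", host="<host>", port="<port>")
cur = conn.cursor()
sqlstr = "COPY \"HISTORICALS\".\"HISTORICAL_DAILY_MASTER\" FROM STDIN DELIMITER ',' CSV"
# add HEADER after CSV above if dump.csv has a header row
try:
    with open('/Users/kevin/Dropbox/Stonks/HISTORICALS/dump.csv') as f:
        cur.copy_expert(sqlstr, f)
    conn.commit()
except psycopg2.Error as e:
    conn.rollback()
    print("COPY failed:", e)  # surface the real error instead of hiding it
finally:
    cur.close()
    conn.close()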

psycopg2.connect() fails when connecting with database

I am trying to access a database via psycopg2 from my Jupyter notebook, but I receive the following error message:
OperationalError: could not create SSL context: no such file
import pandas as pd
import psycopg2
#Connect to postgres
conn_string = "host='xx' sslmode='require' \
dbname='dbname' port='xx' user='xx' \
password='xx'"
#Create rework dataset
conn = psycopg2.connect(conn_string)
postgreSQL_select_Query = u'SELECT * FROM "xx"."yy"'
conn.set_client_encoding('UNICODE')
cursor = conn.cursor()
cursor.execute(postgreSQL_select_Query)
colnames = [desc[0] for desc in cursor.description]
df_imp = cursor.fetchall()
df = pd.DataFrame(data=df_imp, columns=colnames)
The expected result is access to the database and the generation of a dataframe.
The actual result is OperationalError: could not create SSL context: no such file at the step conn = psycopg2.connect(conn_string):
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
<ipython-input-2-932b2fb01c9f> in <module>
5
6 #Create rework dataset
----> 7 conn = psycopg2.connect(conn_string)
8 postgreSQL_select_Query = u'SELECT * FROM "xx"."xx"'
9 conn.set_client_encoding('UNICODE')
~\AppData\Local\Continuum\anaconda3\lib\site-packages\psycopg2\__init__.py in connect(dsn, connection_factory, cursor_factory, **kwargs)
128
129 dsn = _ext.make_dsn(dsn, **kwargs)
--> 130 conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
131 if cursor_factory is not None:
132 conn.cursor_factory = cursor_factory
OperationalError: could not create SSL context: No such process
After trying several solutions, the problem turned out to be the version of the psycopg2 library.
conda update does not install the latest version of the library; however, pip does, and then my code worked again!
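For reference, the upgrade step could look like the sketch below; whether you want psycopg2 or the pre-built psycopg2-binary wheel depends on your environment:
# upgrade inside the same environment the notebook uses
pip install --upgrade psycopg2
# or, if building from source is a problem, the pre-built wheel
pip install --upgrade psycopg2-binary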

How to get data out of a postgres bytea column into a python variable using sqlalchemy?

I am working with the script below.
If I change the script so I avoid the bytea datatype, I can easily copy data from my postgres table into a python variable.
But if the data is in a bytea postgres column, I encounter a strange object called memory which confuses me.
Here is the script which I run against anaconda python 3.5.2:
# bytea.py
import sqlalchemy

# I should create a conn
db_s = 'postgres://dan:dan@127.0.0.1/dan'
conn = sqlalchemy.create_engine(db_s).connect()

sql_s = "drop table if exists dropme"
conn.execute(sql_s)
sql_s = "create table dropme(c1 bytea)"
conn.execute(sql_s)
sql_s = "insert into dropme(c1)values( cast('hello' AS bytea) );"
conn.execute(sql_s)
sql_s = "select c1 from dropme limit 1"
result = conn.execute(sql_s)
print(result)
# <sqlalchemy.engine.result.ResultProxy object at 0x7fcbccdade80>
for row in result:
    print(row['c1'])
    # <memory at 0x7f4c125a6c48>
How do I get the data that is inside <memory at 0x7f4c125a6c48>?
You can cast it using Python's bytes():
for row in result:
    print(bytes(row['c1']))
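The memory object is a memoryview over the bytea payload; if the column actually holds text, you can go one step further and decode it. A small sketch, assuming UTF-8 data as in the 'hello' example above:
result = conn.execute("select c1 from dropme limit 1")
for row in result:
    raw = bytes(row['c1'])         # memoryview -> bytes: b'hello'
    print(raw.decode('utf-8'))     # bytes -> str: 'hello'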

How to connect Jupyter Ipython notebook to Amazon redshift

I am using Mac Yosemite.
I have installed the packages postgresql, psycopg2, and simplejson using conda install "package name".
After the installation I imported these packages. I tried to create a JSON file with my Amazon Redshift credentials:
{
  "user_name": "YOUR USER NAME",
  "password": "YOUR PASSWORD",
  "host_name": "YOUR HOST NAME",
  "port_num": "5439",
  "db_name": "YOUR DATABASE NAME"
}
I used:
with open("Credentials.json") as fh:
    creds = simplejson.loads(fh.read())
But this throws an error. These were the instructions given on a website. I tried searching other websites, but no site gives a good explanation.
Please let me know how I can connect Jupyter to Amazon Redshift.
There's a nice guide from RJMetrics here: "Setting up Your Analytics Stack with Jupyter Notebook & AWS Redshift". It uses ipython-sql.
This works great and displays results in a grid.
In [1]:
import sqlalchemy
import psycopg2
import simplejson
%load_ext sql
%config SqlMagic.displaylimit = 10
In [2]:
with open("./my_db.creds") as fh:
creds = simplejson.loads(fh.read())
connect_to_db = 'postgresql+psycopg2://' + \
creds['user_name'] + ':' + creds['password'] + '#' + \
creds['host_name'] + ':' + creds['port_num'] + '/' + creds['db_name'];
%sql $connect_to_db
In [3]:
%sql SELECT * FROM my_table LIMIT 25;
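If you want the query result as a pandas DataFrame rather than a grid, ipython-sql result sets expose a DataFrame() helper; a short sketch, assuming pandas is installed:
result = %sql SELECT * FROM my_table LIMIT 25;
df = result.DataFrame()   # convert the ipython-sql ResultSet to a pandas DataFrame
df.head()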
Here's how I do it:
----INSERT IN CELL 1-----
import psycopg2
redshift_endpoint = "<add your endpoint>"
redshift_user = "<add your user>"
redshift_pass = "<add your password>"
port = <your port>
dbname = "<your db name>"
----INSERT IN CELL 2-----
from sqlalchemy import create_engine
from sqlalchemy import text
engine_string = "postgresql+psycopg2://%s:%s@%s:%d/%s" \
    % (redshift_user, redshift_pass, redshift_endpoint, port, dbname)
engine = create_engine(engine_string)
----INSERT IN CELL 3 - THIS EXAMPLE WILL GET ALL TABLES FROM YOUR DATABASE-----
sql = """
select schemaname, tablename from pg_tables order by schemaname, tablename;
"""
----LOAD RESULTS AS TUPLES TO A LIST-----
tables = []
output = engine.execute(sql)
for row in output:
    tables.append(row)
tables
--IF YOU'RE USING PANDAS---
import pandas as pd
raw_data = pd.read_sql_query(text(sql), engine)
The easiest way is to use this extension -
https://github.com/sat28/jupyter-redshift
The sample notebook shows how it loads the redshift utility as an IPython magic.
Edit 1
Support for writing back to the Redshift database has also been added.