The SQL works fine, but with Python it doesn't insert values into the table - postgresql

I'm trying to use Python to run stored procedures in SQL. I have tested my SQL code and it works fine, but when I execute it via Python, the values are not inserted into my table.
Note: I don't get any errors when executing.
My code is below:
import psycopg2
con = psycopg2.connect(dbname='dbname', host='host', port='5439',
                       user='username', password='password')

def executeScriptsfromFile(filename):
    # Open and read the file as a single buffer
    cur = con.cursor()
    fd = open(filename, 'r')
    sqlFile = fd.read()
    fd.close()
    # All SQL commands (split on ';')
    sqlCommands = filter(None, sqlFile.split(';'))
    # Execute every command from the input file
    for command in sqlCommands:
        # This will skip and report errors
        # For example, if the tables do not yet exist, this will skip over
        # the DROP TABLE commands
        try:
            cur.execute(command)
            con.commit()
        except Exception as inst:
            print("Command skipped:", inst)
    cur.close()

executeScriptsfromFile('filepath.sql')
Insert statement in the SQL file:
INSERT INTO schema.users
SELECT
UserId
,Country
,InstallDate
,LastConnectDate
FROM #Source;
Note: as I said, the SQL works perfectly fine when I test it directly.
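As a side note, splitting the file contents on ';' leaves whitespace-only fragments that filter(None, …) does not remove, since it only drops empty strings; stripping each piece avoids sending blank commands to execute(). A minimal sketch of how the split behaves (no database needed):

```python
# Hedged sketch: how the ';' split behaves on a typical SQL file.
sql_file = "DROP TABLE IF EXISTS t;\nCREATE TABLE t(i int);\n"

# filter(None, ...) only removes empty strings, so the trailing "\n" survives
naive = list(filter(None, sql_file.split(';')))

# Stripping each fragment removes whitespace-only commands as well
cleaned = [c.strip() for c in sql_file.split(';') if c.strip()]
```

Here `naive` still ends with a lone "\n" entry, while `cleaned` holds exactly the two statements.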

Related

How can I get Technical Names of my Fields Using COPY Command

I wrote my code and I am getting all the data from my Postgres database, but the technical names of the fields stored in my database are not coming out in my xlsx file.
Below is my code:
conn = psycopg2.connect("dbname=cush2 user=tryton50 password=admin host=localhost")
cur = conn.cursor()
sql = "COPY (SELECT * FROM cushing_syndrome) TO STDOUT WITH CSV DELIMITER ','"
with open("/home/cf/Desktop/result_1.xlsx", "w") as file:
    cur.copy_expert(sql, file)
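For reference, COPY in CSV mode accepts a HEADER option that writes the column names as the first row, which sounds like the missing piece here. Note also that COPY emits plain CSV, so writing it into a .xlsx file will not produce a real Excel workbook; a .csv extension is more honest. A hedged sketch of the adjusted SQL string (table name taken from the question):

```python
# COPY ... WITH CSV HEADER asks PostgreSQL to emit the column names as the first row.
sql = "COPY (SELECT * FROM cushing_syndrome) TO STDOUT WITH CSV HEADER DELIMITER ','"

# The copy_expert call itself stays the same, e.g.:
# with open("/home/cf/Desktop/result_1.csv", "w") as file:
#     cur.copy_expert(sql, file)
```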

Can't get mysql_query to work, mysql_error displays nothing

Recently converted from mysql to mysqli and things were working fine. Connect, select still work fine but now one query fails. Here is the basic code:
Connect to database ($con) - successful.
mysqli_query to select some data from table1 (fn1, ln1, yr1) - successful.
Table data goes to $fn, $ln, $yr after mysql_fetch_array - successful.
Use the data to form an insert:
$sql = "insert into table2 (fn2, ln2, yr2) values ('$fn', '$ln', '$yr')";
mysql-query($con, $sql) or die ("Insert failed: " . mysqli_error($con));
The query fails with the Insert failed message but no reason from mysql_error.
What have I missed?
I've tried it. It works. The insert is OK!
$link = new mysqli($dbhost, $dbuser, $dbpass, $dbname);
$qry = "insert into reputazione (iduser,star,votante,commento)
values(10,5,\"pippo\",\"commento\")";
mysqli_query($link, $qry);
if (mysqli_affected_rows($link) > 0) {
    echo "ok!";
}
Okay, I solved the problem by coding all the mysqli commands in-line and not calling functions and passing the sql statements to them. I had functions for connecting to the DB, selecting from one table, inserting into another table and then deleting from the first table.

How to get data out of a postgres bytea column into a python variable using sqlalchemy?

I am working with the script below.
If I change the script so I avoid the bytea datatype, I can easily copy data from my postgres table into a python variable.
But if the data is in a bytea Postgres column, I get back a strange object printed as <memory at ...> which confuses me.
Here is the script which I run against anaconda python 3.5.2:
# bytea.py
import sqlalchemy
# I should create a conn
db_s = 'postgres://dan:dan@127.0.0.1/dan'
conn = sqlalchemy.create_engine(db_s).connect()
sql_s = "drop table if exists dropme"
conn.execute(sql_s)
sql_s = "create table dropme(c1 bytea)"
conn.execute(sql_s)
sql_s = "insert into dropme(c1)values( cast('hello' AS bytea) );"
conn.execute(sql_s)
sql_s = "select c1 from dropme limit 1"
result = conn.execute(sql_s)
print(result)
# <sqlalchemy.engine.result.ResultProxy object at 0x7fcbccdade80>
for row in result:
    print(row['c1'])
# <memory at 0x7f4c125a6c48>
How to get the data which is inside of memory at 0x7f4c125a6c48 ?
You can cast it using Python's bytes():
for row in result:
    print(bytes(row['c1']))
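The <memory at …> object is a standard memoryview; on Python 3, psycopg2 returns bytea values this way. Wrapping it in bytes() (and decoding, if the content is known to be text) recovers the data. The conversion itself can be seen without a database:

```python
# psycopg2 hands back bytea columns as memoryview objects on Python 3;
# simulate one here with a plain bytes literal.
mv = memoryview(b'hello')

raw = bytes(mv)             # the original byte string
text = raw.decode('utf-8')  # only if the bytes are known to be UTF-8 text
```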

Inserting array containing json objects as rows in postgres 9.5

Just started using PostgreSQL 9.5 and have run into my first problem with a jsonb column. I have been trying to find an answer to this for a while but failing badly. Can someone help?
I have a json array in python containing json objects like this:
[{"name":"foo", "age":"18"}, {"name":"bar", "age":"18"}]
I'm trying to insert this into a jsonb column like this:
COPY person(person_jsonb) FROM '/path/to/my/json/file.json';
But only 1 row gets inserted. I hope to have each json object in the array as a new row like this:
1. {"name":"foo", "age":"18"}
2. {"name":"bar", "age":"18"}
Also tried:
INSERT INTO person(person_jsonb)
VALUES (%s)
with (json.dumps(data['person']),) as the parameter. Still only one row gets inserted. Can someone please help?
EDIT: Added python code as requested
import psycopg2, sys, json

con = None
orders_file_path = '/path/to/my/json/person.json'
try:
    with open(orders_file_path) as data_file:
        data = json.load(data_file)
    con = psycopg2.connect(...)
    cur = con.cursor()
    person = data['person']
    cur.execute("""
        INSERT INTO orders(orders_jsonb)
        VALUES (%s)
        """, (json.dumps(person), ))
    con.commit()
except psycopg2.DatabaseError, e:
    if con:
        con.rollback()
finally:
    if con:
        con.close()
person.json file:
{"person":[{"name":"foo", "age":"18"}, {"name":"bar", "age":"18"}]}
Assuming the simplest schema:
CREATE TABLE test(data jsonb);
Option 1: parse the JSON in Python
You need to insert each row into PostgreSQL separately. You can parse the JSON on the Python side, split the top-level array, then use cursor.executemany to execute the INSERT with each JSON object already split out:
import json
import psycopg2

con = psycopg2.connect('...')
data = json.loads('[{"name":"foo", "age":"18"}, {"name":"bar", "age":"18"}]')
with con.cursor() as cur:
    cur.executemany('INSERT INTO test(data) VALUES(%s)', [(json.dumps(d),) for d in data])
con.commit()
con.close()
Option 2: parse the JSON in PostgreSQL
Another option is to push the JSON processing into the PostgreSQL side using json_array_elements:
import psycopg2

con = psycopg2.connect('...')
data = '[{"name":"foo", "age":"18"}, {"name":"bar", "age":"18"}]'
with con.cursor() as cur:
    cur.execute('INSERT INTO test(data) SELECT * FROM json_array_elements(%s)', (data,))
con.commit()
con.close()
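The key step in option 1 is turning the parsed array into a list of one-element tuples, one per row, which is the shape executemany expects. That part can be checked without a connection:

```python
import json

data = json.loads('[{"name":"foo", "age":"18"}, {"name":"bar", "age":"18"}]')

# One single-value tuple per future row, as executemany expects
params = [(json.dumps(d),) for d in data]
```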

Use python to execute line in postgresql

I have imported a shapefile named tc_bf25 using QGIS, and the following is my Python script, typed in PyScripter:
import sys
import psycopg2
conn = psycopg2.connect("dbname = 'routing_template' user = 'postgres' host = 'localhost' password = '****'")
cur = conn.cursor()
query = """
ALTER TABLE tc_bf25 ADD COLUMN source integer;
ALTER TABLE tc_bf25 ADD COLUMN target integer;
SELECT assign_vertex_id('tc_bf25', 0.0001, 'the_geom', 'gid')
;"""
cur.execute(query)
query = """
CREATE OR REPLACE VIEW tc_bf25_ext AS
SELECT *, startpoint(the_geom), endpoint(the_geom)
FROM tc_bf25
;"""
cur.execute(query)
query = """
CREATE TABLE node1 AS
SELECT row_number() OVER (ORDER BY foo.p)::integer AS id,
foo.p AS the_geom
FROM (
SELECT DISTINCT tc_bf25_ext.startpoint AS p FROM tc_bf25_ext
UNION
SELECT DISTINCT tc_bf25_ext.endpoint AS p FROM tc_bf25_ext
) foo
GROUP BY foo.p
;"""
cur.execute(query)
query = """
CREATE TABLE network1 AS
SELECT a.*, b.id as start_id, c.id as end_id
FROM tc_bf25_ext AS a
JOIN node AS b ON a.startpoint = b.the_geom
JOIN node AS c ON a.endpoint = c.the_geom
;"""
cur.execute(query)
query = """
ALTER TABLE network1 ADD COLUMN shape_leng double precision;
UPDATE network1 SET shape_leng = length(the_geom)
;"""
cur.execute(query)
I got the error at the second cur.execute(query).
But when I go to pgAdmin to check the result, even though no error occurred, the first cur.execute(query) didn't add the new columns to my table.
What mistake did I make, and how do I fix it?
I am working with PostgreSQL 8.4 and Python 2.7.6 under Windows 8.1 x64.
When using psycopg2, autocommit is set to False by default. The first two statements both refer to the table tc_bf25, but the first statement makes an uncommitted change to the table, so try running conn.commit() between statements to see if this resolves the issue.
You should run each statement individually. Do not combine multiple statements into a semicolon-separated series and run them all at once. It makes error handling and fetching of results much harder.
If you still have the problem once you've made that change, show the exact statement you're having the problem with.
Just to add to @Talvalin's answer: you can enable auto-commit by adding
conn = psycopg2.connect("dbname='mydb' user='postgres' host='localhost' password='****'")
conn.autocommit = True
after you connect to your database using psycopg2.