Unable to insert data into a PostgreSQL 11.0 table

I would like to insert values into a PostgreSQL 11.0 table, but I am getting the following error:
TypeError: not all arguments converted during string formatting
I am running the following code:
# CREATE TABLE
try:
    connect_str = "dbname='xx' user='xx' host='xx' " "password='xx' port = xx"
    conn = psycopg2.connect(connect_str)
except:
    print("Unable to connect to the database")
cursor = conn.cursor()
cursor.execute("""DROP TABLE IF EXISTS tbl""")
try:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS tbl(
        entry_id CHARACTER VARYING NOT NULL,
        name CHARACTER VARYING NOT NULL,
        class CHARACTER VARYING NOT NULL,
        ko_id CHARACTER VARYING NOT NULL,
        PRIMARY KEY (entry_id))
        """)
except:
    print("The table cannot be created!")
conn.commit()
conn.close()
cursor.close()
# INSERT DATA INTO TABLE
try:
    connect_str = "dbname='xx' user='xx' host='xx' " "password='xx' port = xx"
    conn = psycopg2.connect(connect_str)
except:
    print("Unable to connect to the database")
cursor = conn.cursor()
with open('file.txt') as f:
    for line in f:
        if re.match('^[A-Z]+', line) and line.startswith("ENTRY") or line.startswith("NAME") or line.startswith("CLASS") or line.startswith("KO_PATHWAY"):
            key, value = line.split(" ", 1)
            #print(key, value)
            if key == "ENTRY":
                cursor.execute("INSERT INTO tbl (entry_id) VALUES (%s)", ('value'))
conn.commit()
conn.close()
cursor.close()
The key-value data looks like this:
ENTRY map00010 Pathway
NAME Glycolysis / Gluconeogenesis
CLASS Metabolism; Carbohydrate metabolism
KO_PATHWAY ko00010
ENTRY map00011 Pathway
NAME Glycolysis
CLASS Metabolism; Carbohydrate
KO_PATHWAY ko00011
The values map00010 Pathway and map00011 Pathway should be inserted into the table, creating two rows.
Any help is highly appreciated.
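For what it's worth, a minimal sketch of the likely fix: psycopg2 expects the query parameters as a sequence, but ('value') is just the string 'value' in parentheses, so each character is treated as a separate parameter and the string-formatting TypeError is raised. Passing the parsed variable as a one-element tuple should resolve it (connection details are placeholders; .strip() is added here to drop the padding whitespace around the value):

import psycopg2

conn = psycopg2.connect("dbname='xx' user='xx' host='xx' password='xx'")
cursor = conn.cursor()

with open('file.txt') as f:
    for line in f:
        if line.startswith(("ENTRY", "NAME", "CLASS", "KO_PATHWAY")):
            key, value = line.split(" ", 1)
            if key == "ENTRY":
                # (value.strip(),) is a one-element tuple; ('value') is not,
                # it is just the literal string 'value'.
                cursor.execute("INSERT INTO tbl (entry_id) VALUES (%s)",
                               (value.strip(),))

conn.commit()
cursor.close()
conn.close()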

Related

R2DBC: using a Collection as a repository method parameter to compare with uuid[]

Database: R2DBC Postgres
I have a column model_id in a table with type uuid[]:
create table t_job
(
    id uuid default gen_random_uuid() not null
        primary key,
    model_id uuid[] not null
    -- ... other columns ...
);
I need to compare the values of the model_id column with a Set<UUID>:
#Query("""
SELECT case
WHEN COUNT(j) >= 1
THEN true
ELSE false
END
FROM t_job AS j
WHERE j.model_id IN :modelIdSet
AND j.state = 'done'
AND j.output_format = 'COLLISION'
""")
Mono<Boolean> isCollisionJobDoneBySeveralModelsId(String modelIdSet);
OUTPUT:
"debugMessage": "executeMany; bad SQL grammar [ SELECT case\n WHEN COUNT(j) >= 1\n THEN true\n ELSE false\n END\n FROM runner_processing_service.t_job AS j\n WHERE j.model_id IN :modelIdSet\n AND j.state = 'done'\n AND j.output_format = 'COLLISION'\n]; nested exception is io.r2dbc.postgresql.ExceptionFactory$PostgresqlBadGrammarException: [42601] syntax error at or near "$1""
How do I correctly insert and compare values from a uuid[] column with a Set<UUID>?
I tried converting the Set to a String and passing that string to the repository method, but that does not work either.
The query works correctly from the console.
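As a side note on the SQL itself: IN does not apply to an array column, which is consistent with the syntax error. On the Postgres side, the array overlap operator && (or @> for containment) is the usual way to compare a uuid[] column with a set of UUIDs. A minimal psycopg2 sketch of those semantics (not an R2DBC fix; table and column names are taken from the question, the connection string is a placeholder):

import uuid
import psycopg2

conn = psycopg2.connect("dbname=test")  # placeholder connection string
cur = conn.cursor()

model_ids = [uuid.uuid4(), uuid.uuid4()]  # stand-in for the Set<UUID>
cur.execute(
    """
    SELECT COUNT(*) >= 1
    FROM t_job AS j
    WHERE j.model_id && %s::uuid[]
      AND j.state = 'done'
      AND j.output_format = 'COLLISION'
    """,
    # A Python list is adapted to a Postgres ARRAY; the cast makes it uuid[].
    ([str(m) for m in model_ids],),
)
print(cur.fetchone()[0])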

Proper syntax for upsert insert update psycopg2

I have a table that I created in postgresql:
CREATE TABLE issuer(
    cik char(10) NOT NULL,
    issuer_name char(150) NOT NULL,
    trading_symbol char(10) NOT NULL,
    SIC char(6) NOT NULL,
    date_added timestamp NULL DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT issuer_pk PRIMARY KEY (cik)
);
I am trying to either update a row if it exists or insert it if it doesn't.
I have searched the documentation on how to make this work, but I am baffled by the errors I get.
I have a function that I call
io = postgres_update_issuer(con,cur,cik,coname,ticker,'')
When I call this function, python calls threading and then quits.
Here is the function I call:
def postgres_update_issuer(conn, cur, issuer_cik, name, ticker, sic):
    sql = """
        INSERT INTO issuer (cik, issuer_name, trading_symbol, SIC)
        VALUES (%s, %s, %s, %s)
        ON CONFLICT (cik)
        DO UPDATE SET
            (issuer_name, trading_symbol, SIC)
            = (EXCLUDED.issuer_name, EXCLUDED.trading_symbol, EXCLUDED.SIC)
        ;"""
    try:
        # data = (issuer_cik, name, ticker, sic)
        cur.execute(sql, (issuer_cik, name, ticker, sic))
        return True
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
When I change the function to this, I get the "couldn't move all fields" error message:
def postgres_update_issuer(conn, cur, issuer_cik, name, ticker, sic):
    sql = """
        INSERT INTO issuer (cik, issuer_name, trading_symbol, SIC)
        VALUES (%s)
        ON CONFLICT (cik)
        DO UPDATE SET
            (issuer_name, trading_symbol, SIC)
            = (EXCLUDED.issuer_name, EXCLUDED.trading_symbol, EXCLUDED.SIC)
        ;"""
    try:
        data = (issuer_cik, name, ticker, sic)
        cur.execute(sql, (data))
        return True
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
What is the correct way to do this? I am using Python 3.6, psycopg2, and PostgreSQL 10.
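For reference, a hedged sketch of a version that should work: the first variant already matches four %s placeholders with a 4-tuple, so the likely missing piece is a commit; the second variant fails because VALUES (%s) supplies a single placeholder for four values. Assuming the issuer table above:

# Sketch: upsert with ON CONFLICT; four placeholders matched by a 4-tuple.
def postgres_upsert_issuer(conn, cur, issuer_cik, name, ticker, sic):
    sql = """
        INSERT INTO issuer (cik, issuer_name, trading_symbol, SIC)
        VALUES (%s, %s, %s, %s)
        ON CONFLICT (cik) DO UPDATE SET
            (issuer_name, trading_symbol, SIC)
            = (EXCLUDED.issuer_name, EXCLUDED.trading_symbol, EXCLUDED.SIC)
    """
    try:
        cur.execute(sql, (issuer_cik, name, ticker, sic))
        conn.commit()  # without a commit the row never becomes visible
        return True
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
        conn.rollback()
        return False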

PostgreSQL complains about nonexistent comparison function for element in primary key

I have a table in a PostgreSQL database in which I want to store the following columns:
STATION LOCATION SERVICE NORTH EAST
text point text real real
Each tuple(STATION, LOCATION, SERVICE) is unique, so I decided to make it a composite type and make it the primary key.
However, when I try to insert a new entry in the database I get the following error:
psycopg2.ProgrammingError: could not identify a comparison function for type point
I guess it is complaining that you cannot order two points in a 2D plane, but I cannot see how that is relevant. I have managed to use composite types that made use of points as primary keys in a test example, so I cannot see how this is different.
I want to know:
Why this is happening.
How it can be fixed, preferably without changing the table schema.
Debugging information:
testdb=> \d ERROR_KEY
Composite type "public.error_key"
Column | Type | Modifiers
----------+-------+-----------
station | text |
location | point |
service | text |
testdb=> \d testtable
Table "public.testtable"
Column | Type | Modifiers
--------+-----------+-----------
key | error_key | not null
north | real |
east | real |
Indexes:
"testtable_pkey" PRIMARY KEY, btree (key)
For reference, this is the code I am using for the insertion:
from collections import namedtuple
import psycopg2

DB_NAME = 'testdb'
DB_USER = 'testuser'
DB_HOST = 'localhost'
DB_PASSWORD = '123456'
PVT_TABLE_NAME = 'testtable'

Coordinate = namedtuple('Coordinate', ['lat', 'lon'])
PVT_Error_Key = namedtuple('PVT_Error_Key',
                           ['station', 'location', 'service'])
PVT_Error_Entry = namedtuple(
    'PVT_Error_Entry', ['key', 'north', 'east'])

def _adapt_coordinate(coord):
    """
    Adapter from Python class to Postgres geometric point
    """
    lat = psycopg2.extensions.adapt(coord.lat)
    lon = psycopg2.extensions.adapt(coord.lon)
    return psycopg2.extensions.AsIs("'(%s, %s)'" % (lat, lon))

def _connect_to_db(db_name, db_user, db_host, db_password):
    """
    Connects to a database and returns a cursor object to handle the connection
    """
    connection_str = ('dbname=\'%s\' user=\'%s\' host=\'%s\' password=\'%s\''
                      % (db_name, db_user, db_host, db_password))
    return psycopg2.connect(connection_str).cursor()

def main():
    # Register the adapter for the location
    psycopg2.extensions.register_adapter(Coordinate, _adapt_coordinate)
    cursor = _connect_to_db(DB_NAME, DB_USER, DB_HOST, DB_PASSWORD)
    # Create a dummy entry
    entry = PVT_Error_Entry(
        key=PVT_Error_Key(station='GKIR',
                          location=Coordinate(lat=12, lon=10),
                          service='E1'),
        north=1, east=2)
    # Insert the dummy entry in the database
    cursor.execute(
        'INSERT INTO %s '
        '(KEY, NORTH, EAST) '
        'VALUES((%%s, %%s, %%s), %%s, %%s)'
        % PVT_TABLE_NAME,
        (entry.key.station, entry.key.location, entry.key.service,
         entry.north, entry.east))
    # Retrieve and print all entries of the database
    cursor.execute('SELECT * FROM %s', (PVT_TABLE_NAME))
    rows = cursor.fetchall()
    print(rows)

if __name__ == '__main__':
    main()
You cannot use a column of type point in a primary key, e.g.:
create table my_table(location point primary key);
ERROR: data type point has no default operator class for access method "btree"
HINT: You must specify an operator class for the index or define a default operator class for the data type.
The error message is clear enough, you need to create a complete btree operator class for the type.
The full procedure is described in this answer: Creating custom “equality operator” for PostgreSQL type (point) for DISTINCT calls.
Update. With the workaround you mentioned in your comment
create table my_table(
    x numeric,
    y numeric,
    primary key (x, y));

insert into my_table values
    (1.1, 1.2);
you can always create a view, which can be queried just like a table:
create view my_view as
select point(x, y) as location
from my_table;
select *
from my_view;
location
-----------
(1.1,1.2)
(1 row)
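For completeness, a short psycopg2 sketch of using that workaround from Python (table and view names as above; the connection string is a placeholder):

import psycopg2

conn = psycopg2.connect("dbname=testdb user=testuser host=localhost")  # placeholder
cur = conn.cursor()

# The composite key is stored as two plain numeric columns...
cur.execute("INSERT INTO my_table (x, y) VALUES (%s, %s)", (1.1, 1.2))
conn.commit()

# ...and read back as a point through the view.
cur.execute("SELECT location FROM my_view")
print(cur.fetchall())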

Passing a numeric WHERE-condition parameter in PostgreSQL using Python

I am trying to use PostgreSQL from Python. The query compares against a numeric field in the WHERE condition. The result set is not fetched, and I get the error "psycopg2.ProgrammingError: no results to fetch". There are records in the database with agent_id (an integer field) > 1.
import psycopg2

# Try to connect
try:
    conn = psycopg2.connect("dbname='postgres' host='localhost'")
except:
    print "Error connect to the database."
cur = conn.cursor()
agentid = 10000
try:
    sql = 'SELECT * from agent where agent_id > %s::integer;'
    data = agentid
    cur.execute(sql, data)
except:
    print "Select error"
rows = cur.fetchall()
print "\nRows: \n"
for row in rows:
    print " ", row[9]
Perhaps try these things in your code:
conn=psycopg2.connect("dbname=postgres host=localhost user=user_here password=password_here port=port_num_here")
sql = 'SELECT * from agent where agent_id > %s;'
data = (agentid,) # A single element tuple.
then use
cur.execute(sql,data)
Also, I am confused about what you want to do with this code:
for row in rows:
    print " ", row[9]
Do you want to print each row in rows, or just index 9 of rows, from
rows = cur.fetchall()
If you wanted that index, you could
print rows[9]
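Putting those pieces together, a minimal Python 3 sketch of the corrected query (connection details are placeholders); the one-element tuple is the key fix, and it also avoids the "no results to fetch" error that follows a failed execute inside the bare except:

import psycopg2

conn = psycopg2.connect("dbname=postgres host=localhost")  # placeholder credentials
cur = conn.cursor()

agentid = 10000
# Parameters must be a sequence; (agentid,) is a one-element tuple.
cur.execute("SELECT * FROM agent WHERE agent_id > %s;", (agentid,))

for row in cur.fetchall():
    print(row)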

Psycopg2 insert python dictionary in postgres database

In python 3+, I want to insert values from a dictionary (or pandas dataframe) into a database. I have opted for psycopg2 with a postgres database.
The problem is that I cannot figure out the proper way to do this. I can easily concatenate a SQL string to execute, but the psycopg2 documentation explicitly warns against this. Ideally I wanted to do something like this:
cur.execute("INSERT INTO table VALUES (%s);", dict_data)
and hoped that the execute could figure out that the keys of the dict match the columns in the table. This did not work. From the examples in the psycopg2 documentation I got to this approach:
cur.execute("INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%s" for pair in dict_data]) + ");", dict_data)
from which I get a
TypeError: 'dict' object does not support indexing
What is the most pythonic way of inserting a dictionary into a table with matching column names?
Two solutions:
from psycopg2.extensions import AsIs

d = {'k1': 'v1', 'k2': 'v2'}

insert = 'insert into table (%s) values %s'
l = [(c, v) for c, v in d.items()]
columns = ','.join([t[0] for t in l])
values = tuple([t[1] for t in l])
cursor = conn.cursor()
print(cursor.mogrify(insert, ([AsIs(columns)] + [values])))

keys = d.keys()
columns = ','.join(keys)
values = ','.join(['%({})s'.format(k) for k in keys])
insert = 'insert into table ({0}) values ({1})'.format(columns, values)
print(cursor.mogrify(insert, d))
Output:
insert into table (k2,k1) values ('v2', 'v1')
insert into table (k2,k1) values ('v2','v1')
I sometimes run into this issue, especially with respect to JSON data, which I naturally want to deal with as a dict. Very similar... but maybe a little more readable?
def do_insert(rec: dict):
    cols = rec.keys()
    cols_str = ','.join(cols)
    vals = [rec[k] for k in cols]
    vals_str = ','.join(['%s' for i in range(len(vals))])
    sql_str = """INSERT INTO some_table ({}) VALUES ({})""".format(cols_str, vals_str)
    cur.execute(sql_str, vals)
I typically call this type of thing from inside an iterator, and usually wrapped in a try/except. Either the cursor (cur) is already defined in an outer scope, or one can amend the function signature and pass a cursor instance in. I rarely insert just a single row... and like the other solutions, this allows for missing cols/values, provided the underlying schema allows for it too. As long as the dict underlying the keys view is not modified while the insert is taking place, there's no need to specify keys by name, as the values will be ordered as they are in the keys view.
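When inserting many such dicts at once, psycopg2.extras.execute_values can batch them; a minimal sketch under the same assumptions (a hypothetical some_table with columns k1 and k2, and dict rows sharing the same keys):

from psycopg2.extras import execute_values

rows = [{'k1': 'v1', 'k2': 'v2'},
        {'k1': 'v3', 'k2': 'v4'}]

# With dict rows, the template names each value by key, so key order
# inside the individual dicts does not matter.
execute_values(
    cur,  # an open cursor, as in the snippets above
    "INSERT INTO some_table (k1, k2) VALUES %s",
    rows,
    template="(%(k1)s, %(k2)s)",
)
conn.commit()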
[Suggested answer/workaround - better answers are appreciated!]
After some trial/error I got the following to work:
sql = "INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%("+k+")s" for k in dict_data]) + ");"
This gives the sql string
"INSERT INTO table (k1, k2, ... , kn) VALUES (%(k1)s, %(k2)s, ... , %(kn)s);"
which may be executed by
with psycopg2.connect(database='deepenergy') as con:
    with con.cursor() as cur:
        cur.execute(sql, dict_data)
Pros/cons?
using %(name)s placeholders may solve the problem:
dict_data = {'key1': val1, 'key2': val2}
cur.execute("""INSERT INTO table (field1, field2)
               VALUES (%(key1)s, %(key2)s);""",
            dict_data)
You can find the usage in the psycopg2 docs: Passing parameters to SQL queries.
Here is another solution, inserting a dictionary directly.
Product Model (has the following database columns):
name
description
price
image
digital - (defaults to False)
quantity
created_at - (defaults to current date)
Solution:
data = {
    "name": "product_name",
    "description": "product_description",
    "price": 1,
    "image": "https",
    "quantity": 2,
}
cur = conn.cursor()
cur.execute(
    "INSERT INTO products (name, description, price, image, quantity) "
    "VALUES (%(name)s, %(description)s, %(price)s, %(image)s, %(quantity)s)", data
)
conn.commit()
conn.close()
Note: The columns to be inserted are specified in the execute statement (... INTO products (column names to be filled) VALUES ...), and data is the dictionary supplying the values. Because the %(key)s placeholders are named, each value is looked up by key, so the order of the keys in the dictionary does not matter.