Psycopg2 SQL statement is working from the query builder but not from Python - postgresql

The statement I want to use is the following:
UPDATE table SET colum1 = 1 WHERE colum2 LIKE '%gasse%';
When I use exactly this statement, everything containing gasse gets updated, but when I do this in Python with Psycopg2:
sqlstring = """UPDATE table SET colum1 = 1 WHERE colum2 LIKE '%gasse%';"""
cur.execute(sqlstring)
colum1 does not get updated. Have I done something wrong? Did I not escape something correctly?

You need to commit the transaction that Psycopg automatically opens before the first command:
connection.commit()
http://initd.org/psycopg/docs/connection.html
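As a minimal sketch of the full flow (the DSN is a placeholder, and the table/column names are taken verbatim from the question):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder DSN
cur = conn.cursor()
cur.execute("UPDATE table SET colum1 = 1 WHERE colum2 LIKE '%gasse%';")
conn.commit()  # without this, the UPDATE is rolled back when the connection closes
cur.close()
conn.close()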

Related

DataGrip: How to chop long 'returning' part?

I have the following problem with formatting a PostgreSQL query in DataGrip (and also in all other JetBrains products regarding PG queries):
update some_table
set some_column = 42
where id = 42
returning val_a as "valA",
val_b as "valB",
val_c as "valC",
val_d as "valD",
val_e as "valE",
val_f as "valF",
val_g as "valG";
When I now use the built-in SQL formatter, DataGrip produces this:
update some_table
set some_column = 42
where id = 42
returning val_a as "valA", val_b as "valB", val_c as "valC", val_d as "valD", val_e as "valE", val_f as "valF", val_g as "valG";
You see what the issue is: all returned values are on one line (it even ignores my max-line-length setting). I tried different settings in my IDE, but to no avail. Note that for now, I don't care whether the returned vals are indented or aligned or whatever; I just want the query to be "readable".
Looking forward to a solution and thanks in advance!
There is no option for that :( Please file a request:
https://youtrack.jetbrains.com/newIssue?project=DBE

pgAdmin argument formats can't be mixed

Background
Ubuntu 18.04
Postgresql 11.2 in Docker
pgAdmin4 3.5
I have a column named alias with type character varying(64)[]. Values had already been set on some rows using psycopg2, and everything was fine then:
SQL = 'UPDATE public."mytable" SET alias=%s WHERE id=%s'
query = cursor.mogrify(SQL, ([values], id))
cursor.execute(query)
conn.commit()
Recently, when I try to add more values using the pgAdmin GUI, an error occurs that says Argument formats can't be mixed.
Well, it turns out that if I insert the values using a script, e.g. via psql or the query tool in pgAdmin, the error does not happen; it only happens when using the pgAdmin GUI.
Example script:
UPDATE public."mytable" SET alias='{a, b}' WHERE id='myid'
But since the GUI makes it much easier to modify values, I would really like to figure this out. Any ideas?
It's a bug in pgAdmin 4.17.
It looks like it happens whenever you edit a char(n)[] or varchar(n)[] cell in a table (although char[] and varchar[] are unaffected).
It should be fixed in 4.18.
In the meantime, you can fix it yourself without much trouble. The pgAdmin4 backend is written in Python, so there's no need to rebuild anything; you can just dive in and change the source.
Find the directory where pgAdmin4 is installed, and open web/pgadmin/tools/sqleditor/__init__.py in an editor. Find the line:
typname = '%s(%s)[]'.format(
...and change it to:
typname = '{}({})[]'.format(
You'll need to restart the pgAdmin4 service for the change to take effect.
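For context, here is an illustrative sketch of why that line misbehaves (the snippet itself is not from the pgAdmin source):

# illustrative only: .format() finds no {} placeholders, so the %s survive
typname = '%s(%s)[]'.format('character varying', 64)
print(typname)  # prints '%s(%s)[]' unchanged
# when this string later ends up inside a query executed with named
# parameters, psycopg2 sees both %s and %(name)s placeholders and raises
# "argument formats can't be mixed"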
I wasn't able to get this working with the character varying data type, but it worked once I converted the column data type to text.

How to insert similar value into multiple locations of a psycopg2 query statement using dict? [duplicate]

I have a Python script that runs a pgSQL file through SQLAlchemy's connection.execute function. Here's the block of code in Python:
results = pg_conn.execute(sql_cmd, beg_date = datetime.date(2015,4,1), end_date = datetime.date(2015,4,30))
And here's one of the areas where the variable gets inputted in my SQL:
WHERE
( dv.date >= %(beg_date)s AND
dv.date <= %(end_date)s)
When I run this, I get a cryptic Python error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) argument formats can't be mixed
…followed by a huge dump of the offending SQL query. I've run this exact code with the same variable convention before. Why isn't it working this time?
I encountered a similar issue to Nikhil's. I had a query with LIKE clauses that worked until I modified it to include a bind variable, at which point I received the following error:
DatabaseError: Execution failed on sql '...': argument formats can't be mixed
The solution is not to give up on the LIKE clause; it would be pretty crazy if psycopg2 simply didn't permit LIKE clauses. Rather, we can escape the literal % with %%. For example, the following query:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%';
would need to be modified to:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%%';
More details in the psycopg2 docs: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
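For instance, a minimal sketch of the same query going directly through psycopg2 (the DSN and table are illustrative):

import datetime
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
cur = conn.cursor()
# the literal % in the LIKE pattern is doubled because the query carries parameters
cur.execute(
    "SELECT * FROM people WHERE start_date > %(beg_date)s AND name LIKE 'John%%';",
    {'beg_date': datetime.date(2015, 4, 1)},
)
rows = cur.fetchall()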
As it turned out, I had used a SQL LIKE operator in the new SQL query, and the % wildcard was messing with the parameter escaping. For instance:
dv.device LIKE 'iPhone%' or
dv.device LIKE '%Phone'
Another answer offered a way to un-escape and re-escape, which I felt would add unnecessary complexity to otherwise simple code. Instead, I used pgSQL's ability to handle regex to modify the SQL query itself. This changed the above portion of the query to:
dv.device ~ E'iPhone.*' or
dv.device ~ E'.*Phone$'
So for others: you may need to change your LIKE operators to regex '~' to get it to work. Just remember that it'll be WAY slower for large queries. (More info here.)
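As a rough sketch (reusing the cursor from the previous snippet, with a hypothetical table name standing in for the question's dv), the rewritten query then needs no escaping at all, because the regex patterns contain no % characters:

# 'device_visits' is a hypothetical table name for illustration
cur.execute(
    "SELECT * FROM device_visits dv "
    "WHERE (dv.device ~ E'iPhone.*' OR dv.device ~ E'.*Phone$') "
    "AND dv.date >= %(beg_date)s;",
    {'beg_date': datetime.date(2015, 4, 1)},
)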
For me, it turned out I had a % in a SQL comment:
/* Any future change in the testing size will not require
a change here... even if we do a 100% test
*/
This works fine:
/* Any future change in the testing size will not require
a change here... even if we do a 100pct test
*/

How to prevent the NULLs from being removed?

I am currently using SQL Server 2008R2.
I am using this script:
SELECT a.productname, a.orderdate, a.workarea
FROM database1table1 AS a
WHERE a.orderdate >='2016/08/01'
Which gives the output:
PRODUCT NAME   ORDER DATE   WORKAREA
x              2016/08/07   NULL
y              2016/08/09   HOLDING
z              2016/08/10   ACTION
a              2016/08/12   ACTION
My problem arises when I amend the above script to read,
...
WHERE a.orderdate >='2016/08/01'
AND a.workarea NOT IN ('HOLDING')
When I do this, not only does it remove 'HOLDING', it also removes the NULL rows, which I definitely do not want.
Please can you suggest an amendment to the script to prevent the NULLS being removed - I only want to see the value 'HOLDING' taken out.
With many thanks!
You can try a workaround:
AND ISNULL(a.workarea, '') NOT IN ('HOLDING')
This replaces NULL values of a.workarea with an empty string inside the WHERE clause, so the NOT IN comparison works correctly. (The NULL rows disappear in the first place because NULL NOT IN ('HOLDING') evaluates to UNKNOWN rather than TRUE, so those rows fail the filter.)

SQLAlchemy, Psycopg2 and Postgresql COPY

It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me for COPYing a table out to CSV...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)

dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
# raw_connection() exposes the underlying psycopg2 connection
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
dbcopy_f.close()
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, to use raw_connection you'll need to separate them (unless there is some way to access the engine through the session object that I don't know about). The SQL string provided to copy_expert is also not the only way to do it; there is a basic copy_to function that you can use with a subset of the parameters that you could pass to a normal COPY TO query. Overall performance of the command seems fast to me, copying out a table of ~20000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from a SQLAlchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session currently is. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError; you would have to create a new one.
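Putting that together, a minimal sketch (the table name, file path, and column layout are placeholders):

# assumes an existing SQLAlchemy session bound to a psycopg2 engine
cursor = session.connection().connection.cursor()
with open('/tmp/some_table.tsv') as f:
    # copy_from reads tab-separated rows by default
    cursor.copy_from(f, 'some_table')
session.commit()  # note: this invalidates the cursor, as described above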
You can use:
import io

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create the table from the DataFrame's schema (no rows)
    df[:0].to_sql(table, engine, if_exists=if_exists)
    # Prepare the data as an in-memory CSV
    output = io.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)
    # Insert the data via COPY
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
I insert 200,000 rows in 5 seconds instead of 4 minutes.
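For example, assuming a pandas DataFrame df and an engine created as in the other answers, a call might look like this (the table name is illustrative):

to_sql(engine, df, 'my_table', if_exists='replace')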
It doesn't look like it.
You may have to just use psycopg2 to expose this functionality and forego the ORM capabilities. I guess I don't really see the benefit of ORM in such an operation anyway since it's a straight bulk insert and dealing with individual objects a la an ORM would not really make a whole lot of sense.
If you're starting from SQLAlchemy, you need to first get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
from sqlalchemy import create_engine

engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or
engine = session.bind
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to(), as indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
Check out what the options above mean and others that may be of interest to your specific situation https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to read or write directly in the COPY statement. Don't do that: with a filename, the file is read or written by the server process, not your client, so unless you're running as root (and who runs Python as root during development?) you're likely just wasting your time. Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(), and it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
    cursor.copy_expert(copy_from, file=f)
# don't forget to commit the changes.
connection.commit()
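The dump_to template from above works the same way, except the file is opened for writing (the path is a placeholder):

# dumping the table to a CSV file
with open('/path/to/dump.csv', 'w') as f:
    cursor.copy_expert(dump_to, file=f)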
You don't need to drop down to psycopg2, use raw_connection, or get a cursor.
Just execute the sql as usual, you can even use bind parameters with text():
from sqlalchemy import text

engine.execute(
    text("copy some_table from :csv delimiter ',' csv").execution_options(autocommit=True),
    csv='/tmp/a.csv',
)
You can drop the execution_options(autocommit=True) if this PR is accepted.