Background
Ubuntu 18.04
Postgresql 11.2 in Docker
pgAdmin4 3.5
I have a column named alias with type character varying(64)[]. Values had already been set on some rows using psycopg2, and everything worked fine then.
SQL = 'UPDATE public."mytable" SET alias=%s WHERE id=%s'
query = cursor.mogrify(SQL, ([values] , id))
cursor.execute(query)
conn.commit()
Recently, when I tried to add more values using the pgAdmin GUI (as shown in the first figure), the error in the second figure appeared: Argument formats can't be mixed.
It turns out that if I insert the values with a script, for example via psql or pgAdmin's Query Tool, the error does not occur; it only happens when using the pgAdmin GUI.
Example script:
UPDATE public."mytable" SET alias='{a, b}' WHERE id='myid'
But since the GUI is much easier for modifying values, I'd really like to figure this out. Any ideas?
It's a bug in pgAdmin 4.17.
It looks like it happens whenever you edit a char(n)[] or varchar(n)[] cell in a table (although char[] and varchar[] are unaffected).
It should be fixed in 4.18.
In the meantime, you can fix it yourself without much trouble. The pgAdmin4 backend is written in Python, so there's no need to rebuild anything; you can just dive in and change the source.
Find the directory where pgAdmin4 is installed, and open web/pgadmin/tools/sqleditor/__init__.py in an editor. Find the line:
typname = '%s(%s)[]'.format(
...and change it to:
typname = '{}({})[]'.format(
You'll need to restart the pgAdmin4 service for the change to take effect.
I wasn't able to get this working with the character varying data type, but it worked once I converted the column's data type to text.
Related
I'm trying to run a function on a PostgreSQL 11 server from Ignition (8.0.16) as a named-query and I'm getting a column index error. Every discussion of this error I've found with regard to Postgres says it's caused by a mismatch between the number of parameters provided and the number expected.
The index reported as out of range is always one more than the number of parameters provided, even when I change the number of parameters.
I count 13 everywhere: Ignition parameters, test parameters, the function call in Ignition, the function definition, the table. Here is the function call from Ignition:
SELECT insert_run_data(
:speed_in,
:avg_speed_in,
:coater_num_in,
:coater_op_in,
:finisher_in,
:helper1_in,
:helper2_in,
:coater_down_in,
:current_downtime_reason_in,
:hanging_downtime_reason_in,
:tabcode_in,
:start_time_in,
:end_time_in
);
In the same named-query window, if I comment out the function call and try to write directly using the same parameters, it writes without issue:
insert into
nh_coater_tabcode_operator_data(
speed, avg_speed, coater_num, coater_op, finisher, helper1,
helper2, coater_down, current_downtime_reason,
hanging_downtime_reason, tabcode, start_time, end_time
)
values(:speed_in, :avg_speed_in, :coater_num_in, :coater_op_in,
:finisher_in, :helper1_in, :helper2_in, :coater_down_in,
:current_downtime_reason_in, :hanging_downtime_reason_in, :tabcode_in,
:start_time_in, :end_time_in
);
The function also runs fine from within pgAdmin.
Here are gists showing the SQL used to create the table and function, the stack trace from Ignition, and an image showing the named-query authoring window parameters matching:
create function gist
create table gist
stack trace of error gist
parameters in Ignition
The Ignition Designer seems to cache the function, so if it is changed you will need to save and reopen the project: either open another project and switch back, or close the window and open a new Designer window.
I have created a stored procedure:
@DeviceID nvarchar(20) = ''
WITH EXECUTE AS CALLER
AS
SELECT
    amd.BRANDID,
    amd.DEVICEID
FROM AMDEVICETABLE amd
WHERE
    left(amd.Deviceid, len(@DeviceID)) in (@DeviceID)
The length of amd.Deviceid is about 15 characters
In Visual Studio I create a parameter @DeviceID, and when I enter e.g. ABCDE (the first 5 characters of Deviceid) everything works perfectly.
The problem is that I want to pass multiple values like
jhmcl*, jhmgd*.
So I created my own little version of your report, and I believe the problem is your LEN() function. I'm surprised it doesn't return an error, because it errors out in Report Builder for SQL Server 2014 (the simpler version of SSRS). I would test what your LEN(@DeviceID) is returning; I bet it's not the value you expect. Instead, you might try something like the following to cover every possible prefix length, though I don't know how it will perform.
SELECT DeviceID
FROM YourTable
WHERE LEFT(DeviceID, 1) IN (@DeviceID)
   OR LEFT(DeviceID, 2) IN (@DeviceID)
   OR LEFT(DeviceID, 3) IN (@DeviceID)
   ..
   OR LEFT(DeviceID, 15) IN (@DeviceID)
It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me for COPYing a table out to CSV...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)

dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
dbcopy_f.close()
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, you'll need to separate them in order to use raw_connection (unless there is some way to access the engine through the session object that I don't know about). The SQL string provided to copy_expert is also not the only way to do it; there is a basic copy_to function that you can use with a subset of the parameters that you could pass to a normal COPY TO query (a minimal sketch follows the links below). Overall performance of the command seems fast to me, copying out a table of ~20,000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
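For what it's worth, here is a rough sketch of the copy_to variant mentioned above; the connection string, file path, and some_table name are just illustrative assumptions, not anything from the original answer:
from sqlalchemy import create_engine

# Assumed connection string and table name, mirroring the snippet above.
eng = create_engine("postgresql://user:pwd@host:5432/db")
conn = eng.raw_connection()
cur = conn.cursor()

# copy_to covers the common case without writing the COPY statement yourself;
# it accepts a subset of COPY's options (sep, null, columns) but has no CSV header option.
with open('/tmp/some_table_copy.tsv', 'w') as f:
    cur.copy_to(f, 'some_table', sep='\t', null='')

cur.close()
conn.close()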
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from an SQLAlchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError, and you would have to create a new one.
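A minimal sketch of that approach; the connection string, table name, and sample rows are placeholder assumptions:
import io

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Placeholder connection string and table name, for illustration only.
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
session = sessionmaker(bind=engine)()

# The psycopg2 cursor obtained from the session, as described above.
cursor = session.connection().connection.cursor()

# Tab-separated rows whose columns match some_table's column order.
data = io.StringIO("1\tfoo\n2\tbar\n")
cursor.copy_from(data, 'some_table', sep='\t')

# The COPY ran inside the session's current transaction, so commit via the session.
session.commit()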
You can use:
import cStringIO  # Python 2; on Python 3 use io.StringIO instead

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create the (empty) table from the DataFrame's schema
    df[:0].to_sql(table, engine, if_exists=if_exists)

    # Prepare the data as CSV in memory
    output = cStringIO.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)

    # Insert the data with COPY
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
With this I insert 200,000 rows in 5 seconds instead of 4 minutes.
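A possible usage sketch of the function above; the connection details, DataFrame contents, and mytable name are placeholders I've made up for illustration:
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string and table name.
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
df = pd.DataFrame({'id': [1, 2, 3], 'name': ['a', 'b', 'c']})

# Creates the empty table from the DataFrame's schema, then bulk-loads it with COPY.
to_sql(engine, df, 'mytable', if_exists='replace')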
It doesn't look like it.
You may just have to use psycopg2 directly to expose this functionality and forego the ORM capabilities. I don't really see the benefit of an ORM in such an operation anyway, since it's a straight bulk insert and dealing with individual objects à la an ORM doesn't make a whole lot of sense.
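If you do go that route, a rough sketch with plain psycopg2 might look like this; the connection parameters, table name, and file path are placeholders, not anything from the original answer:
import psycopg2

# Placeholder connection parameters, table name, and file path.
conn = psycopg2.connect(host='localhost', dbname='mydb', user='myuser', password='password')
cur = conn.cursor()

# Bulk-load a tab-separated file; its columns must match mytable's column order.
with open('/tmp/mytable.tsv') as f:
    cur.copy_from(f, 'mytable', sep='\t')

conn.commit()
cur.close()
conn.close()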
If you're starting from SQLAlchemy, you need to first get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
from sqlalchemy import create_engine

engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or, from an existing session:
engine = session.bind
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to(), as indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
Check out what the options above mean, and others that may be of interest to your specific situation, at https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to write to or read from directly in the COPY statement. You don't want to do that here: a file named directly in COPY is read or written by the database server process, not by your Python process, so you're likely just wasting your time unless you're running with the required server-side privileges (and who runs Python as root during development?). Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(); it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
    cursor.copy_expert(copy_from, file=f)

# don't forget to commit the changes.
connection.commit()
You don't need to drop down to psycopg2, use raw_connection, or create a cursor.
Just execute the SQL as usual; you can even use bind parameters with text():
from sqlalchemy import text

engine.execute(
    text("copy some_table from :csv delimiter ',' csv").execution_options(autocommit=True),
    csv='/tmp/a.csv',
)
You can drop the execution_options(autocommit=True) if this PR gets accepted.
I'm trying to analyze how PostgreSQL parses a query. After tracing through the PostgreSQL source code and embedding printf() calls here and there, I've learned that the query is parsed into a raw parse tree by raw_parser, which is located in the file parser.c.
The strange thing is, I've already embedded a dummy printf() in raw_parser, and after reinstalling PostgreSQL and executing a query, my printf() output is not printed to the screen!
Can anybody please tell me where I went wrong?
Thanks in advance :D
If you use fprintf(stderr, "...."), then you can find the result in the server log. Don't forget that you are not working with the server directly. For debugging purposes there is the elog function; it's like a printf whose output is sent to the client application:
elog(NOTICE, "some text");
Its format string is like printf's, but you must remember that PostgreSQL uses different format specifiers than glibc, so you can only show integer or float variables this way. String values in PostgreSQL use a different representation than C's zero-terminated strings.
I'm using the Pdo_Mssql adapter against a Sybase database and working around issues as I encounter them. One pesky issue remaining is Zend_Db's insistence on quoting BIT field values. When running the following for an insert:
$row = $this->createRow();
...
$row->MyBitField = $data['MyBitField'];
...
$row->save();
FreeTDS log output shows:
dbutil.c:87:msgno 257: "Implicit conversion from datatype 'VARCHAR' to 'BIT' is not allowed. Use the CONVERT function to run this query.
I've tried casting values as int and bool, but this seems to be a table metadata problem, not a data type problem with input.
Fortunately, Zend_Db_Expr works nicely. The following works, but I'd like to be database server agnostic.
$row->MyBitField = new Zend_Db_Expr("CONVERT(BIT, {$data['MyBitField']})");
I've verified that describeTable() is returning BIT for the field. Any ideas on how to get ZF to stop quoting MS SQL/Sybase BIT fields?
You can simply try this (it works for the MySQL BIT type):
$row->MyBitField = new Zend_Db_Expr($data['MyBitField']);