How to execute file with multiple DDL statements - amazon-redshift

I'm running redshift_connector in Python on Windows. I have a .sql file that is a collection of DDL statements for the database.
When I execute it, I get the following error:
'cannot insert multiple commands into a prepared statement'

You've probably found a solution for this already, but for the sake of closing the loop, here is mine:
I switched from using redshift_connector to using psycopg2, which does allow multiple commands.
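As a minimal sketch of that route (the connection details below are placeholders, not real values; psycopg2 sends the string as a simple query, so multiple ';'-separated commands in one execute() are accepted):

import psycopg2

# Hypothetical connection parameters -- substitute your own cluster details.
conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",
)
with open("ddl.sql") as f:
    ddl = f.read()
with conn, conn.cursor() as cur:
    cur.execute(ddl)  # the whole multi-statement script runs in one call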
If you need to use redshift_connector, however, there is a little workaround, though it may mean updating some of your SQL:
Add a function to your Python script that splits the SQL into individual commands on the ';' delimiter and executes each one.
The only catch is where a ';' needs to appear literally inside a statement. For that case I added an escape argument, which is where you'd need to update your SQL: replace any literal ';' with the escape token, and the function restores it to ';' after the commands have been split apart.
Here's the function:
def rc_multicommand(rc_cursor, cs, escape=None):
    # Split the command string on ';' and execute each statement in turn.
    # Any literal ';' should be pre-replaced with the escape token,
    # which is restored after the split.
    for split_command in cs.split(";"):
        if split_command.strip():  # skip empty/whitespace-only fragments
            if escape is not None:
                split_command = split_command.replace(escape, ";")
            rc_cursor.execute(split_command)
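And a usage sketch to tie it together (again, connection parameters are placeholders):

import redshift_connector

# Hypothetical connection parameters -- substitute your own.
conn = redshift_connector.connect(
    host="my-cluster.example.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",
)
with open("ddl.sql") as f:
    sql_text = f.read()

rc_multicommand(conn.cursor(), sql_text)
conn.commit()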

Related

Postgres - PreparedStatement.setString sending incorrect data

I'm trying to use select pg_catalog.hashtext(?) via JDBC PreparedStatement, and running into a weird behavior.
For most strings it works fine, e.g. the following randomly generated string:
"Fm_:VW:<jBGOl$K "
and I get the correct hash back: 641495800
But for some of the strings, it spits back a hash that doesn't match the value I get when I query the DB directly via psql or some other tool such as DataGrip.
For instance, this works fine:
"}F:d(2 dS8xt9KP0$~tYw;R(V"!2[7&Xs2Wj#5 k|F[}%.ZQ^93~
Cuk&93d!t8b|{4F&{1j{.;C},1s/b&wYZ Ckc5vqy|e+5&5EW%RQ6F0>R4#h.6$iU>{=kl!{e(CTH^DvN/<eG9 bjHx#9=&& G$W_Y =! j\q3T;[H.ve-~>S5j8eI.gWQmg. C!WpWK0z>f?^^LLMO:3R';!4eVxU2)~1F6Zs!p0 F'1b*G:xBO5cN{O'1P~
fj5g%IcT}]w ;;DlD Q~D=wT qN7zON]/J9Heh3qwJ #n qMTG\M7#h,8JUP3Sl}L:wb7#bRc&eIWp\z>HuwZI2Ej5;v7M _8DU.d?mvD| !rS!XS;8QQYh6D=BMJ5m2$>cR ob#'{dCOr#NzDk c!JtQbzCg&#dG:qtHy)O4 ohWQ`ed
2 O'HmHt\<SO
gHKAo`WIb"HF\LrpKKDsW -e##v%RS+,-61lze bd|tyl);A0h":O40O71b(0cDM57gTFL~[7ksp
_Nx:"
But this doesn't:
".4X$!S"s
3E&fJZP*yC#6 ii7^D%Nj3Qn(]:&ykP3(%9 Ww}| ZOmcZ:(w<d= On/m\)vfAEu)s:Yy<17:l9GImT!BgH,FG(:DanwL|3'#XS
a_+nwbqPYBu[DWW`VbBKzF%CnaYpH "
Now, I tried using Statement instead of PreparedStatement along with a string-concatenated query, and that works fine, as long as I escape single-quote characters (') with two single quotes ('') before executing the query. So it appears that somehow PreparedStatement.setString is doing something weird with the String that I pass to it.
Note: The reason I'm testing this with random strings is because my code needs to be able to work with any UTF-8 string that's thrown at it. This test only uses ASCII, and it's already failing in some cases. I don't want to use Statement as that opens up a whole different discussion.
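The same bound-parameter-versus-literal comparison can be reproduced with any driver that binds parameters; here is a sketch in Python with psycopg2, purely illustrative (the connection and test string are placeholders):

import psycopg2

conn = psycopg2.connect(dbname="test")      # hypothetical connection
s = '}F:d(2 dS8xt9KP0$~tYw;R(V"!2[7&Xs2Wj'  # any candidate test string

with conn.cursor() as cur:
    # Bound parameter: the driver handles quoting and encoding.
    cur.execute("SELECT pg_catalog.hashtext(%s)", (s,))
    bound = cur.fetchone()[0]

    # Manually escaped literal, mimicking the Statement-based test.
    cur.execute("SELECT pg_catalog.hashtext('%s')" % s.replace("'", "''"))
    literal = cur.fetchone()[0]

print(bound, literal, bound == literal)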

Use of column names in Redshift COPY command which is a reserved keyword

I have a table in redshift where the column names are 'begin' and 'end'. They are Redshift keywords. I want to explicitly use them in the Redshift COPY command. Is there a workaround rather than renaming the column names in the table? That will be my last option.
I tried to enclose them within single/double quotes, but it looks like the COPY command only accepts comma-separated column names.
The COPY command fails if you don't escape keywords used as column names, e.g. begin or end.
copy test1(col1,begin,end,col2) from 's3://example/file/data1.csv' credentials 'aws_access_key_id=XXXXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXXXX' delimiter ',';
ERROR: syntax error at or near "end"
But it works fine if begin and end are enclosed in double quotes (") as below.
copy test1(col1,"begin","end",col2) from 's3://example/file/data1.csv' credentials 'aws_access_key_id=XXXXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXXXX' delimiter ',';
I hope it helps. If you get a different error, please update your question.
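If you are driving this from Python as in the main question on this page, the same quoting carries through unchanged inside the SQL string (credential placeholders kept from the example above; cursor is assumed to come from redshift_connector or psycopg2):

copy_sql = (
    'copy test1(col1,"begin","end",col2) '
    "from 's3://example/file/data1.csv' "
    "credentials 'aws_access_key_id=XXXXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXXXX' "
    "delimiter ','"
)
cursor.execute(copy_sql)  # reserved names stay double-quoted in the statement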

How do I use a variable in Postgres scripts?

I'm working on a prototype that uses Postgres as its backend. I don't do a lot of SQL, so I'm feeling my way through it. I made a .pgsql file I run with psql that executes each of many files that set up my database, and I use a variable to define the schema that will be used so I can test features without mucking up my "good" instance:
\set schema_name 'example_schema'
\echo 'The Schema name is' :schema_name
\ir sql/file1.pgsql
\ir sql/file2.pgsql
This has been working well. I've defined several functions that expand :schema_name properly:
CREATE OR REPLACE FUNCTION :schema_name.get_things_by_category(...
For reasons I can't figure out, this isn't working in my newest function:
CREATE OR REPLACE FUNCTION :schema_name.update_thing_details(_id uuid, _details text)
RETURNS text
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
UPDATE :schema_name.things
...
The syntax error indicates it's interpreting :schema_name literally after UPDATE instead of expanding it. How do I get it to use the variable value instead of the literal value here? I get that maybe within the BEGIN..END is a different context, but surely there's a way to script this schema name in all places?
I can think of three approaches, since psql cannot do this directly.
Shell script
Use a bash script to perform the variable substitution and pipe the result into psql, like:
#!/bin/bash
schemaName="$1"
# Substitute the placeholder, then feed the rewritten script to psql.
sed -e "s/#SCHEMA_NAME#/$schemaName/" script.sql | psql
This would probably be a lot of boilerplate if you have a lot of .sql scripts.
Staging Schema
Keep the approach you have now with a hard-coded schema of something like staging and then have a bash script go and rename staging to whatever you want the actual schema to be.
Customize the search path
Your entry point could be an inline script within bash that is piped into psql, does an up-front update of the default connection schema, then uses \ir to include all of your .sql files, which should not specify a schema.
#!/bin/bash
schemaName="$1"
psql <<SCRIPT
SET search_path TO $schemaName;
\ir sql/file1.pgsql
\ir sql/file2.pgsql
SCRIPT
Some details: How to select a schema in postgres when using psql?
Personally I am leaning towards the latter approach as it seems the simplest and most scalable.
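For reference, the same entry point written in Python rather than bash, if that fits your toolchain better (a sketch; the schema name comes from the command line):

import subprocess
import sys

schema_name = sys.argv[1]
script = (
    f"SET search_path TO {schema_name};\n"
    "\\ir sql/file1.pgsql\n"
    "\\ir sql/file2.pgsql\n"
)
# Pipe the generated script into psql, as the bash heredoc does.
subprocess.run(["psql"], input=script, text=True, check=True)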
The documentation says:
Variable interpolation will not be performed within quoted SQL literals and identifiers. Therefore, a construction such as ':foo' doesn't work to produce a quoted literal from a variable's value (and it would be unsafe if it did work, since it wouldn't correctly handle quotes embedded in the value).
Now the function body is a “dollar-quoted” string literal ($BODY$...$BODY$), so the variable will not be replaced there.
I can't think of a way to do this with psql variables.

How can I quote a named argument passed in to psql?

psql has a construct for passing named arguments:
psql -v name='value'
which can then be referenced inside a script:
SELECT :name;
which will give the result
?column?
----------
value
(1 row)
During development, I need to drop and recreate copies of the database fairly frequently, so I'm trying to automate the process. So I need to run a query that forcibly disconnects all users and then drops the database. But the database this operates on will vary, so the database name needs to be an argument.
The problem is that the query to disconnect the users requires a string (WHERE pg_stat_activity.datname = 'dbname') and the query that drops requires an unquoted token (DROP DATABASE IF EXISTS dbname). (Sorry. Not sure what to call that kind of token.)
I can use the named argument fine without quotes in the DROP query, but quoting the named argument in the disconnect query causes the argument to not be expanded. I.e., I would get the string ':name' instead of the string 'value'.
Is there any way to turn the unquoted value into a string or turn a string into an unquoted token for the DROP query? I can work around it by putting the disconnect and DROP queries in separate scripts and passing the argument in with quotes to the disconnect and without quotes to the DROP, but I'd prefer they were in the same script since they're really two steps in a single process.
Use:
... WHERE pg_stat_activity.datname = :'name'
Note the placement of the colon before the single quote.
The manual:
If an unquoted colon (:) followed by a psql variable name appears
within an argument, it is replaced by the variable's value, as
described in SQL Interpolation below. The forms
:'variable_name' and :"variable_name" described there work as well.
And:
To quote the value of a variable as an SQL literal, write a colon
followed by the variable name in single quotes.
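Putting both steps in one script then looks like this (a sketch, driven from Python for illustration; 'mydb' is a placeholder, and :"name" quotes the value as an identifier for the DROP):

import subprocess

script = """
-- :'name' interpolates as a quoted SQL literal,
-- :"name" as a quoted identifier.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = :'name' AND pid <> pg_backend_pid();
DROP DATABASE IF EXISTS :"name";
"""
subprocess.run(["psql", "-v", "name=mydb", "-f", "-"],
               input=script, text=True, check=True)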

Why won't this ISQL command run through Perl's DBI?

A while back I was looking for a way to insert values into a text field through isql
and eventually found some load command that worked out for me.
It doesn't work when I try to execute it from Perl. I get a syntax error. I have tried two separate methods and both are not working so far.
I have the SQL statement variable print out at the end of each loop cycle so I know that the syntax is correct, but just not getting across correctly.
Here's the latest snip of code I was testing:
foreach (@files) {
    $STMT = <<EOF;
load from $_ insert into some_table
EOF
    $sth = $db1->prepare($STMT);
    $sth->execute;
}
@files is an array whose elements are a full path/location of a pipe-delimited text file (ex. /home/xx/xx/xx/something.txt)
The number of columns in the table match the number of fields in the text file and the type-checking is fine (I've loaded test files manually without fail)
The error I get back is:
DBD::Informix::db prepare failed: SQL: -201: A syntax error has occurred.
Any idea what might be causing this?
EDIT to RET's & Petr's answers
$STMT = "'LOAD FROM $_ INSERT INTO table'";
system("echo $STMT | isql $db")
I had to change it to this, because the die command would force an unnatural death and the statement had to be wrapped in single quotes.
Petr is exactly right, the LOAD statement is an ISQL or DB-Access extension, so you can't execute it through DBI. If you have a look at the manual, you'll see it is also invalid syntax for SPL, ESQL/C and so on.
It's not clear whether you have to use perl to execute the script, or perl is just a convenient way of generating the SQL.
If the former, and you want a pure-perl method, you have to prepare an INSERT statement (there's just one table involved by the look of it?), and slurp through the file, using split to break it up into columns and executing the prepared insert.
Otherwise, you can generate the SQL using perl and execute it through DB-Access, either directly with system or by wrapping both in either a shell script or DOS batch file.
System call version
foreach (@files) {
    my $stmt = "LOAD FROM $_ INSERT INTO table;";
    # Quote the statement so the shell does not interpret the ';'.
    system("echo '$stmt' | dbaccess $database") == 0
        or die "Statement $stmt failed: $?\n";
}
In a batch script version, you could write all the SQL into a single script, ie:
perl -e 'print "LOAD FROM \x27$_\x27 INSERT INTO table;\n" for @ARGV' file1 [ file2 ... ] > loadfiles.sql
isql database loadfiles.sql
NB, the comment about quotes on the filename is only relevant if the filename contains spaces or metacharacters, the usual issue.
Also, one key difference in behaviour between isql and dbaccess is that when executed in this manner, dbaccess does not stop on error, but isql will. To make dbaccess stop processing on error, set DBACCNOIGN=1 in the environment.
Hope that's helpful.
This is because your query is not an SQL query; it is an isql command that tells isql to parse the input file and generate INSERT statements.
If you think about it, the server can be on a completely different machine and has no idea what file you are talking about or how to access it.
So you basically have two options:
call isql and pipe the LOAD command to it - very ugly
parse the file yourself and generate the INSERT statements
Please note that the file Notes/load.unload is distributed with DBD::Informix and contains guidelines on how to handle UNLOAD operations using Perl, DBI and DBD::Informix. Somewhat to my chagrin, I see that it says "T.B.D." (more or less) for the LOAD section.
As other people have stated, the LOAD and UNLOAD statements are faked by various client-side tools to look like SQL statements, but the Informix server does not support them itself, mainly because of the issue with getting the file from a client machine (perhaps a PC) to the server machine (perhaps a Solaris machine).
To simulate the LOAD statement, you would need to analyze the INSERT INTO Table part. If it lists columns (INSERT INTO Table(Col03, Col05, Col09)), then you can expect three values in the load data file, and they go into those three columns. You would prepare a statement 'SELECT Col03, Col05, Col09 FROM Table' to get the types of the columns. Otherwise, you need to prepare a statement 'SELECT * FROM Table' to get the complete list of columns (and their types). Given the column names and the number of columns, you can create and prepare a suitable insert statement: 'INSERT INTO Table(Col03, Col05, Col09) VALUES(?,?,?)' or 'INSERT INTO Table VALUES(?,?,?,?,?,?,?,?,?)'. You could (arguably should) include column names in the second one.
With that ready, you now have to parse the unloaded data. There is a document in the SQLCMD program, available from the IIUG Software Archive (which has been around a lot longer than Microsoft's upstart program of the same name), that describes the UNLOAD format in considerable detail. Perl has the ability to handle anything Informix uses - witness the UNLOAD information in the load.unload file distributed with DBD::Informix.
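As a language-neutral sketch of the simulation just described (shown in Python with a generic DB-API connection, since paramstyle and driver vary; the table name, pipe delimiter, and helper name are illustrative):

import csv

def simulate_load(conn, path, table, columns=None):
    # Simulate the client-side LOAD: prepare one INSERT and run it per row.
    cur = conn.cursor()
    if columns is None:
        # Empty SELECT just to discover the column list.
        cur.execute(f"SELECT * FROM {table} WHERE 1=0")
        columns = [d[0] for d in cur.description]
    placeholders = ",".join("?" for _ in columns)  # qmark paramstyle assumed
    insert = f"INSERT INTO {table}({','.join(columns)}) VALUES({placeholders})"
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="|"):
            cur.execute(insert, row[:len(columns)])
    conn.commit()

A real implementation would also honor the UNLOAD escaping rules described in the SQLCMD documentation rather than relying on naive pipe-splitting.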
A quick bit of Googling showed that the syntax for load puts quote marks around the file name. What if you change your statement to be:
load from '$_' insert into some_table
Since your statement is not using place holders, you have to put the quotes in yourself, as opposed to using the DBI quoting functionality.