Perl DBI: how to see failed query with bound values?

This is a standard insert example from the DBI manual:
my $query = q{
    INSERT INTO sales (product_code, qty, price) VALUES (?, ?, ?)
};
my $sth = $dbh->prepare($query) or die $dbh->errstr;
while (<>) {
    chomp;
    my ($product_code, $qty, $price) = split /,/;
    $sth->execute($product_code, $qty, $price) or die($query . " " . $dbh->errstr);
}
$dbh->commit or die $dbh->errstr;
I modified it a bit so that, on die, I can see which query failed (die($query . " " . $dbh->errstr)). Still, I'd like to see the query with the bound values, as it was executed. How do I get it?
Edit
Btw, I found an awkward way to see the query with bound values too: you have to make a syntax error in the query. For example, if I change the query above like this:
my $query = q{
    xINSERT INTO sales (product_code, qty, price) VALUES (?, ?, ?)
};
I got it back as I wanted:
DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'xINSERT INTO sample (product_code, qty, price) VALUES ('1', '2', '3')' at line 1
Sometimes it really helps. At least it did to me.

You can use DBI's ParamValues attribute to obtain the parameter values, but you are unlikely to find any method in a DBD to obtain the parameters actually substituted into the SQL, because they are mostly sent to the database separately, after the SQL is parsed. You can look at DBIx::Log4perl to see how ParamValues is used in its error handler; other parts of that module may also be useful to you.
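For example, here is a minimal sketch of such an error check using ParamValues (a standard DBI statement-handle attribute, though how well it is populated varies by driver); the surrounding variables are those of the question's code:
$sth->execute($product_code, $qty, $price) or do {
    my $params = $sth->{ParamValues} || {};
    die "Query failed: $query\nBound values: "
        . join(", ", map { "$_=" . (defined $params->{$_} ? $params->{$_} : 'NULL') } sort keys %$params)
        . "\nError: " . $sth->errstr;
};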

There isn't a standard way to do that. The nearest approximation would be to substitute each placeholder with the (probably quoted) value. In general, it is fairly hard to reliably parse an SQL statement to find the question marks that are placeholders rather than parts of delimited identifiers, strings, or comments. In the example, you can simply look for question marks; that won't always work, though:
INSERT /* ? */ INTO "??".Sales VALUES('?', ?, ?, ?);
Note that with most, but not necessarily all, DBMS (and hence most DBD drivers), the statement is sent to the DBMS when it is prepared; only the values are sent when the statement is executed. There never is a statement created with all the values substituted into the VALUES list, so neither DBI nor the DBMS should be expected to create one.
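If you only need something for debug output, a naive substitution along the following lines can work. It is a sketch only: as noted above, it will be fooled by question marks inside strings, comments, or quoted identifiers, so never execute the result.
sub interpolate_for_debug {
    my ($dbh, $sql, @binds) = @_;
    $sql =~ s/\?/$dbh->quote(shift @binds)/ge;   # quote each value in place
    return $sql;
}
warn interpolate_for_debug($dbh, $query, $product_code, $qty, $price);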

There isn't any generically supported DBI way to do this, I think, but individual database drivers might allow it. What database are you using?

Related

only taking certain values from a list in perl

First I will describe what I have, then the problem.
I have a text file that is structured as follows:
----------- Start of file-----
<!-->
name,name2,ignore,name4,jojobjim,name3,name6,name9,pop
-->
<csv counter="1">
1,2,3,1,6,8,2,8,2,
2,6,5,1,5,8,7,7,9,
1,4,3,1,2,8,9,3,4,
4,1,6,1,5,6,5,2,9
</csv>
-------- END OF FILE-----------
I also have a perl program that has a map:
my %column_mapping = (
    "name"  => 'name',
    "name1" => 'name_1',
    "name2" => 'name_2',
    "name3" => 'name_3',
    "name4" => 'name_4',
    "name5" => 'name_5',
    "name6" => 'name_6',
    "name7" => 'name_7',
    "name9" => 'name_9',
);
My dynamic insert statement (assume I connected to the database properly, and @headers is my array of header names, such as test1, test2, etc.):
my $sql = sprintf 'INSERT INTO tablename ( %s ) VALUES ( %s )',
    join( ',', map { $column_mapping{$_} } @headers ),
    join( ',', ('?') x scalar @headers );
my $sth = $dbh->prepare($sql);
Now for the problem I am actually having:
I need a way to do the insert using only the headers, and the values under them, that are in the map.
In the example data file, there are several names that are not in the map. Is there a way I can ignore them, and the numbers associated with them, in the csv section?
Basically, I want to make a subset csv, to turn it into:
name,name2,name4,name3,name6,name9,
1,2,1,8,2,8,
2,6,1,8,7,7,
1,4,1,8,9,3,
4,1,1,6,5,2,
so that my insert statement will only insert the ones in the map. The data files are always different, the columns are not in the same order, and an unknown number of them will be in the map.
Ideally this would be done efficiently, since this script will be going through thousands of files, each with millions of lines of csv and hundreds of columns.
It is just a text file being read, though, not a csv file, so I'm not sure whether csv libraries can work in this scenario or not.
You would typically put the set of valid indices in a list and use array slices after that.
my @valid = grep { defined($column_mapping{ $headers[$_] }) } 0 .. $#headers;
...
my $sql = sprintf 'INSERT INTO tablename ( %s ) VALUES ( %s )',
    join( ',', map { $column_mapping{$_} } @headers[@valid] ),
    join( ',', ('?') x scalar @valid );
my $sth = $dbh->prepare($sql);
...
my @row = split /,/, <INPUT>;
$sth->execute( @row[@valid] );
...
Because this is about four different questions in one, I'm going to take a higher level approach to the broad set of problems and leave the programming details to you (or you can ask new questions about the details).
I would get the data format changed as quickly as possible. Mixing CSV columns into an XML file is bizarre and inefficient, as I'm sure you're aware. Use a CSV file for bulk data. Use an XML file for complicated metadata.
Having the headers be an XML comment is worse; now you're parsing comments, and comments are supposed to be ignored. If you must retain the mixed XML/CSV format, put the headers into a proper XML tag. Otherwise, what's the point of using XML?
Since you're going to be parsing a large file, use an XML SAX parser. Unlike a more traditional DOM parser which must parse the whole document before doing anything, a SAX parser will process it as it reads the file. This will save a lot of memory. I leave SAX processing as an exercise, start with XML::SAX::Intro.
Within the SAX parser, extract the data from the <csv> and use a CSV parser on that. Text::CSV_XS is a good choice. It is efficient and has solved all the problems of parsing CSV data you are likely to run into.
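A rough sketch of that extraction step, assuming the XML::SAX handler interface (CsvGrabber and $xml_file are hypothetical names): collect character data only while inside the <csv> element.
package CsvGrabber;
use base qw(XML::SAX::Base);

sub start_element {
    my ($self, $el) = @_;
    $self->{in_csv} = 1 if $el->{Name} eq 'csv';
}

sub end_element {
    my ($self, $el) = @_;
    $self->{in_csv} = 0 if $el->{Name} eq 'csv';
}

sub characters {
    my ($self, $data) = @_;
    $self->{csv_text} .= $data->{Data} if $self->{in_csv};
}

package main;
use XML::SAX::ParserFactory;

my $handler = CsvGrabber->new;
XML::SAX::ParserFactory->parser(Handler => $handler)->parse_uri($xml_file);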
When you finally have it down to a Text::CSV_XS object, call getline_hr in a loop to get the rows as hashes, apply your mapping, and insert into your database. @mob's solution is fine, but I would go with SQL::Abstract to generate the SQL rather than doing it by hand. This will protect against SQL injection attacks as well as more mundane things like the headers containing SQL metacharacters and reserved words.
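A minimal sketch of that loop, assuming the rows have already been extracted from the <csv> element into a filehandle $csv_fh, and that @headers and %column_mapping are as in the question ('tablename' is a placeholder):
use Text::CSV_XS;
use SQL::Abstract;

my $csv = Text::CSV_XS->new({ binary => 1 });
$csv->column_names(@headers);               # header names parsed earlier
my $sqla = SQL::Abstract->new;

while (my $row = $csv->getline_hr($csv_fh)) {
    # keep only the columns present in the mapping
    my %mapped = map  { $column_mapping{$_} => $row->{$_} }
                 grep { exists $column_mapping{$_} }
                 keys %$row;
    my ($stmt, @bind) = $sqla->insert('tablename', \%mapped);
    $dbh->do($stmt, undef, @bind);
}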
It's important to separate the processing of the parsed data from the parsing of the data. I'm quite sure that hideous data format will change, either for the worse or the better, and you don't want to tie the code to it.

Preserve new lines in postgres prepared statement via perl script

I need to insert data containing new lines into a text column. When doing a regular insert, the way I do that is with ... VALUES(E'Here comes a \\n new line'), which produces the following when queried:
Here Comes a
new line
However, when using a prepared statement such as:
my $sth = $dbh->prepare("INSERT INTO foo_table (foo_id, foo_bar_id, foo_text) VALUES (?, ?, ?)");
my #data = (1, 9999, "E'Here comes a \\n new line'");
$sth->execute(#data);
the data that gets inserted is literally E'Here comes a \n new line'. I have tried littering my value with various \ characters to see if escaping was the issue, but if it is, I haven't found the right combination yet. Thanks for any help you can offer.
EDIT2: turns out I just needed a single \ when using a prepared statement. Apparently, I overlooked the simplest solution when trying out all of the more complex escape sequences.
EDIT: The escaping mentioned in the answers below does not produce the desired result. It produces Here comes a \n new line when queried, not
Here comes a
new line
like it does when using the E'...' syntax
You should just need to pass the value in. Let the driver deal with the escaping.
The E'Here comes a \\n new line' syntax is only used when the value is embedded in a SQL statement. In your case, just pass in
my @data = (1, 9999, "Here comes a \n new line");
and the parameter binding mechanism will take care of the escaping for you. (Note the single backslash: "\n" in a double-quoted Perl string is a real newline character, which is what the question's EDIT2 confirms.)

Perl: How to retrieve field names when doing $dbh->selectall_..?

$sth = $dbh->prepare($sql);
$sth->execute();
$sth->{NAME};
But how do you do that when:
$hr = $dbh->selectall_hashref($sql,'pk_id');
There's no $sth, so how do you get the $sth->{NAME}? $dbh->{NAME} doesn't exist.
When you're looking at a row, you can always use keys %$row to find out what columns it contains. They'll be exactly the same thing as NAME (unless you change FetchHashKeyName to NAME_lc or NAME_uc).
You can always prepare and execute the handle yourself, get the column names from it, and then pass the handle instead of the sql to selectall_hashref (e.g. if you want the column names but the statement may return no rows). Though you may as well call fetchall_hashref on the statement handle.
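A minimal sketch of that approach:
my $sth = $dbh->prepare($sql);
$sth->execute();
my @column_names = @{ $sth->{NAME} };    # available even if no rows come back
my $hr = $sth->fetchall_hashref('pk_id');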

Escaping & in perl DB queries

I need to manipulate some records in a DB and write their values to another table. Some of these values have an '&' in the string, e.g. 'Me & You'. Short of finding all of these values and placing a \ before any &s, how can I insert these values into a table without Oracle choking on the &?
Use placeholders. Instead of putting '$who' in your SQL statement, prepare with a ? there instead, and then either bind $who, or execute with $who as the appropriate argument.
my $sth = $dbh->prepare_cached('INSERT INTO FOO (WHO) VALUES (?)');
$sth->bind_param(1, $who);
my $rc = $sth->execute();
This is safer and faster than trying to do it yourself. (There is a "quote" method in DBI, but this is better than that.)
This is definitely a wheel you don't need to reinvent. If you are using DBI, don't escape the input; use placeholders.
Example:
my $string = "database 'text' with &special& %characters%";
my $sth = $dbh->prepare("UPDATE some_table SET some_column=?
WHERE some_other_column=42");
$sth->execute($string);
The DBD::Oracle module (and all the other DBD::xxxxx modules) has undergone extensive testing and real-world use. Let it worry about how to get your text inserted into the database.

Why won't this ISQL command run through Perl's DBI?

A while back I was looking for a way to insert values into a text field through isql
and eventually found some load command that worked out for me.
It doesn't work when I try to execute it from Perl. I get a syntax error. I have tried two separate methods and both are not working so far.
I have the SQL statement variable printed out at the end of each loop cycle, so I know that the syntax is correct, but it's just not getting across correctly.
Here's the latest snip of code I was testing:
foreach (@files)
{
    $STMT = <<EOF;
load from $_ insert into some_table
EOF
    $sth = $db1->prepare($STMT);
    $sth->execute;
}
@files is an array whose elements are the full path/location of a pipe-delimited text file (e.g. /home/xx/xx/xx/something.txt)
The number of columns in the table matches the number of fields in the text file, and the type-checking is fine (I've loaded test files manually without fail)
The error I get back is:
DBD::Informix::db prepare failed: SQL: -201: A syntax error has occurred.
Any idea what might be causing this?
EDIT to RET's & Petr's answers
$STMT = "'LOAD FROM $_ INSERT INTO table'";
system("echo $STMT | isql $db")
I had to change it to this, because the die command would force an unnatural death and the statement had to be wrapped in single quotes.
Petr is exactly right: the LOAD statement is an ISQL or DB-Access extension, so you can't execute it through DBI. If you have a look at the manual, you'll see it is also invalid syntax for SPL, ESQL/C, and so on.
It's not clear whether you have to use perl to execute the script, or perl is just a convenient way of generating the SQL.
If the former, and you want a pure-Perl method, you have to prepare an INSERT statement (there's just one table involved by the look of it?), then work through the file, using split to break each line into columns and executing the prepared insert.
Otherwise, you can generate the SQL using perl and execute it through DB-Access, either directly with system or by wrapping both in either a shell script or DOS batch file.
System call version
foreach (@files) {
    my $stmt = "LOAD FROM $_ INSERT INTO table;\n";
    system("echo $stmt | dbaccess $database") == 0
        or die "Statement $stmt failed: $?\n";
}
In a batch script version, you could write all the SQL into a single script, i.e.:
perl -e 'print "LOAD FROM \x27$_\x27 INSERT INTO table;\n" for @ARGV' file1 [ file2 ... ] > loadfiles.sql
isql database loadfiles.sql
NB, the comment about quotes on the filename is only relevant if the filename contains spaces or metacharacters, the usual issue.
Also, one key difference in behaviour between isql and dbaccess is that when executed in this manner, dbaccess does not stop on error, but isql will. To make dbaccess stop processing on error, set DBACCNOIGN=1 in the environment.
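Applied to the system-call version above, that would look something like this sketch:
$ENV{DBACCNOIGN} = 1;    # make dbaccess stop processing on the first error
system("echo $stmt | dbaccess $database") == 0
    or die "Statement $stmt failed: $?\n";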
Hope that's helpful.
This is because your query is not an SQL query; it is an isql command that tells isql to parse the input file and generate INSERT statements.
If you think about it, the server can be on a completely different machine and has no idea what file you are talking about or how to access it.
So you basically have two options:
call isql and pipe the LOAD command to it - very ugly
parse the file yourself and generate the INSERT statements
Please note that the file Notes/load.unload is distributed with DBD::Informix and contains guidelines on how to handle UNLOAD operations using Perl, DBI and DBD::Informix. Somewhat to my chagrin, I see that it says "T.B.D." (more or less) for the LOAD section.
As other people have stated, the LOAD and UNLOAD statements are faked by various client-side tools to look like SQL statements, but the Informix server does not support them itself, mainly because of the issue with getting the file from a client machine (perhaps a PC) to the server machine (perhaps a Solaris machine).
To simulate the LOAD statement, you would need to analyze the INSERT INTO Table part. If it lists columns (INSERT INTO Table(Col03, Col05, Col09)), then you can expect three values in the load data file, and they go into those three columns. You would prepare a statement 'SELECT Col03, Col05, Col09 FROM Table' to get the types of the columns. Otherwise, you need to prepare a statement 'SELECT * FROM Table' to get the complete list of columns (and their types). Given the column names and the number of columns, you can create and prepare a suitable insert statement: 'INSERT INTO Table(Col03, Col05, Col09) VALUES(?,?,?)' or 'INSERT INTO Table VALUES(?,?,?,?,?,?,?,?,?)'. You could (arguably should) include column names in the second one.
With that ready, you now have to parse the unloaded data. There is a document distributed with the SQLCMD program, available from the IIUG Software Archive (a program which has been around a lot longer than Microsoft's upstart program of the same name), that describes the UNLOAD format in considerable detail. Perl has the ability to handle anything Informix uses - witness the UNLOAD information in the load.unload file distributed with DBD::Informix.
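A hedged sketch of that simulation, assuming a simple pipe-delimited file with no embedded delimiters or escapes (real UNLOAD data needs the fuller parsing the SQLCMD document describes); $file, $dbh, and the column list are illustrative:
my @cols   = ('Col03', 'Col05', 'Col09');      # parsed from the LOAD statement
my $insert = sprintf 'INSERT INTO Table (%s) VALUES (%s)',
    join(',', @cols), join(',', ('?') x @cols);
my $sth = $dbh->prepare($insert);

open my $fh, '<', $file or die "Cannot open $file: $!";
while (my $line = <$fh>) {
    chomp $line;
    my @values = split /\|/, $line, -1;        # keep empty fields
    pop @values;                               # drop the empty field after the trailing delimiter
    $sth->execute(@values);
}
close $fh;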
A quick bit of Googling showed that the syntax for LOAD puts quote marks around the file name. What if you change your statement to:
load from '$_' insert into some_table
Since your statement is not using place holders, you have to put the quotes in yourself, as opposed to using the DBI quoting functionality.