I am writing a Perl script that uses the DBI module and connects to a Sybase DB. I am calling a stored procedure (one that I don't have access to, so I cannot post sample code), and when I get data back I get an error that reads "error_handler: Data-conversion resulted in overflow". I still get data back, and after doing some intensive research it seems that some of the column data types (such as BigInt, nvarchar, etc.) are the culprits. Now the question is, how can I fix this? Can this be fixed on the client side, or can it only be fixed on the server side?
my $dbh = DBI->connect("DBI:Sybase:server=$server", $username, $password, {PrintError => 0}) or die;
$dbh->do("use $database") or die;
my $sql = &getQuery;
my $sth = $dbh->prepare($sql) or die;
$sth->execute() or die;
while (my $rowRef = $sth->fetchrow_arrayref) # Error seems to occur here
{
#Parse through each row
}
Part of the FreeTDS 0.82 log that explains the problem:
_ct_bind_data(): column 7 is type 38 and has length 8
_ct_get_server_type(0)
_ct_get_client_type(type 38, user 0, size 8)
cs_convert(0x18dfed40, 0x7fff73216050, 0x18e44250, 0x7fff73215fa0, 0x18e387c0, 0x18e45a64)
_ct_get_server_type(30)
_ct_get_server_type(0)
converting type 127 (8 bytes) to type = 47 (9 bytes)
cs_convert() calling tds_convert
cs_convert() tds_convert returned 10
cs_prretcode(0)
cs_convert() returning CS_FAIL
cs_convert-result = 1
The problem is on the FreeTDS side. I've had the same problem before and successfully fixed it by converting the returned fields to varchar in the select statement.
Given you don't have access to modify the original query, you can do some regex search and replace on the returned $sql variable in your code. In particular, if the original query has a part that looks like
SELECT field1, field2, field3 FROM ...
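then the goal is for $new_sql to end up looking roughly like this (using the field names above):
SELECT convert(varchar, field1), convert(varchar, field2), convert(varchar, field3) FROM ...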
After you retrieve the query statement, you may run
my $new_sql;
if ($sql =~ /SELECT\s+(.*)\s+FROM/i) {   # match the selected-field string
    my $field_str = $1;
    my @fields = split ",", $field_str;  # parse individual fields
    s/\s//g for @fields;                 # get rid of spaces
    my $new_str = join ", ", map { "convert(varchar, $_)" } @fields;   # construct the new field list
    my $quoted_field_str = quotemeta($field_str);                      # prepare the regex replacement string
    $new_sql = $sql;
    $new_sql =~ s/$quoted_field_str/$new_str/i;                        # actual replacement
}
print $new_sql;
Of course, if your original statement is more complex, you should print it out and work out how to modify it with a generic replacement in the same spirit. Alternatively, you can ask your DBA (or whoever has access to the stored procedure) to modify the actual query directly.
Hope this helps.
I am saving output to an array and the array looks like this:
60=20130624-09:45:02.046|21=1|38=565|52=20130624-09:45:02.046|35=D|10=085|40=1|9=205|100=MBTX|49=11342|553=2453|34=388|1=30532|43=Y|55=4323|54=1|56=MBT|11=584|59=0|114=Y|8=FIX.4.4|
Then I converted this array to a scalar variable like this:
$scal = join('', @arr);
And now I am trying to save this into the DB:
my $st = qq(INSERT INTO demo (fix)
VALUES ($scal));
my $r = $dbh->do($st) or die $DBI::errstr;
And my table schema is:
CREATE TABLE demo (fix varchar);
And I keep getting this error:
DBD::SQLite::db do failed: near ":45": syntax error at pdb.pl line 92, <STDIN> line 1.
Any help will be appreciated.
The way you denote your array is a bit weird. Usually you would write it as
my @arr = ( '60=20130624-09:45:02.046',
'21=1',
'38=565',
... );
or whatever your actual content is. But this is not the problem here because you flatten it to the string $scal anyway.
One way to insert this string into your DB is to put ticks (') around it:
my $st = qq(INSERT INTO demo (fix) VALUES ('$scal'));
my $r = $dbh->do($st) or die $DBI::errstr;
But this is bad because it's vulnerable to SQL injection (http://imgs.xkcd.com/comics/exploits_of_a_mom.png).
Consider the case where your string is foo'); delete from demo; --. The final result would then be
INSERT INTO demo (fix) VALUES ('foo'); delete from demo; --')
The second reason why this is bad: Your string could contain ticks ($scal="foo's bar") and that also would mess up the resulting INSERT statement:
INSERT INTO demo (fix) VALUES ('foo's bar');
Conclusion: it's always better to use parameterized queries:
my $st = 'INSERT INTO demo (fix) VALUES (?)';
my $r = $dbh->do($st, undef, $scal) or die $DBI::errstr;
The undef is the optional statement attributes argument (I've rarely seen anything other than undef there). The parameters that follow are substituted for the ?s in the statement. The DB driver does all the quoting for you. The more ?s you use, the more parameters you must supply to do():
my $st = 'INSERT INTO sample_tbl (col1, col2, col3) VALUES (?, ?, ?)';
my $r = $dbh->do($st, undef, 'foo', 42, $scal) or die $DBI::errstr;
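If you are inserting many rows, the same placeholder approach works with an explicit prepare/execute, so the statement is only prepared once. A minimal sketch, assuming a hypothetical @lines array of strings to store:
my $sth = $dbh->prepare('INSERT INTO demo (fix) VALUES (?)');
for my $line (@lines) {                    # @lines: hypothetical list of strings to insert
    $sth->execute($line) or die $DBI::errstr;
}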
I know I can do this with interpolation. Can I do it using placeholders?
I am getting this error:
DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: "{"22,23"}" at ./testPlaceHolders-SO.pl line 20.
For this script:
#!/usr/bin/perl -w
use strict;
use DBI;
# Connect to database.
my $dbh = DBI->connect("dbi:Pg:dbname=somedb;host=localhost;port=5432", "somedb", "somedb");
my $typeStr = "22,23";
my @sqlParms = [ $typeStr ];
my $sqlStr = << "__SQL_END";
SELECT id
FROM states
WHERE typeId in (?)
ORDER BY id;
__SQL_END
my $query = $dbh->prepare($sqlStr);
$query->execute(@sqlParms);
my $id;
$query->bind_columns(\$id);
# Process rows
while ($query->fetch())
{
print "Id: $id\n";
}
Is there a way around it besides interpolation?
DBD::Pg has support for PostgreSQL arrays, so you can simply write a query like this:
WHERE typeid = ANY( ARRAY[1,2,3] )
or, with a parameter...
WHERE typeid = ANY(?)
Then just use the array support
my @targets = (1,2,3);
# ...
$query->execute(\@targets);
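Put together with the question's table and column names, a minimal sketch might look like this (assuming a reasonably recent DBD::Pg, which converts the array reference into a PostgreSQL array for you):
my $sql = 'SELECT id FROM states WHERE typeid = ANY(?) ORDER BY id';
my $query = $dbh->prepare($sql);

my @targets = (22, 23);
$query->execute(\@targets);   # pass the list as an array reference

while (my ($id) = $query->fetchrow_array) {
    print "Id: $id\n";
}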
Posting comment as answer, as requested.
Generate your own placeholder string. Like so:
my @nums = (22,23);
my $placeholder = join ",", ("?") x @nums;
# interpolate the generated ?s into the statement before preparing it
my $query = $dbh->prepare("SELECT id FROM states WHERE typeId IN ($placeholder) ORDER BY id");
$query->execute(@nums);
Yes. You must use placeholders for each value, such as IN (?, ?, ?). You can however generate the correct number of question marks using something like this (untested):
my @values = (22, 23, ...);
# will produce "?, ?, ..."
my $in = join ", ", ("?") x @values;
my $sqlStr = "SELECT id FROM states WHERE typeId in ($in) ORDER BY id;";
my $query = $dbh->prepare($sqlStr);
$query->execute(@values);
Note that if you use an ORM such as DBIx::Class instead, this sort of ugly hack gets abstracted away.
You have to build the SQL statement with the correct number of question marks and then set the parameter values. There is no way to bind a list to a single question mark.
I remember having a problem with the DBI method selectrow_array. When I wasn't tidy enough, I got back from it not the value of the column I asked for, but the count of columns (or something else unwanted, I can't recall exactly). Now I am refactoring some code and I want to make sure, in every possible place, that I get back only the expected value. So I am trying to avoid surprises and find out what the bad behaviour was. From the DBI docs I read that this may really be a problematic situation:
If called in a scalar context for a statement handle that has more
than one column, it is undefined whether the driver will return the
value of the first column or the last. So don't do that. Also, in a
scalar context, an "undef" is returned if there are no more rows or if
an error occurred. That "undef" can't be distinguished from an "undef"
returned because the first field value was NULL. For these reasons
you should exercise some caution if you use "selectrow_array" in a
scalar context, or just don't do that.
Still, I can't force selectrow_array to return anything but the value of col1 (which is what I am expecting):
my $query = 'SELECT col1, col2, col3 FROM table WHERE id = 112233';
my ( $c ) = ( $dbh->selectrow_array( $query ) );
my $x = ask_from_db();
my $y = $dbh->selectrow_array( $query );
my $z = ( $dbh->selectrow_array( $query ) );
my @A = $dbh->selectrow_array( $query );
say "C: $c"; # C: col1
say "X: $x"; # X: col1
say "Y: $y"; # Y: col1
say "Z: $z"; # Z: col1
say "A: #A"; # A: col1 col2 col3
sub ask_from_db {
return $dbh->selectrow_array( $query );
}
Every way I ask above gives me the right result. How should I run the query to get a wrong result (wrong result != col1 value)?
The difference in outcome will be based on the implementation of the driver.
wantarray ? @row : $row[0]
vs
wantarray ? @row : $row[-1]
You'd have to use a different driver to get a different outcome. That said, I imagine you'll have a hard time finding a driver that doesn't return the first.
If you want to be sure to get the first, use:
( $dbh->selectrow_array( $query ) )[0]
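For example, taking only the first column regardless of what the driver would do in scalar context:
my $first_col = ( $dbh->selectrow_array($query) )[0];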
What the documentation means by "it is undefined whether the driver will return the value of the first column or the last" is that the column returned is defined by the database driver and not DBI.
So the Postgres driver may decide to always return the first column whereas the mysql driver may always return the last column, or the column returned might depend on the query.
So don't call selectrow_array in scalar context - always call it in list context:
my @row = $dbh->selectrow_array($query);
and you'll avoid all of the issues that the documentation mentions.
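Calling it in list context also lets you tell "no row found" apart from "the first column was NULL", along these lines (a sketch, using the question's $dbh and $query):
my @row = $dbh->selectrow_array($query);
if (@row) {
    # got a row; $row[0] may still be undef if col1 was NULL
}
else {
    # no matching row, or an error occurred (check $dbh->err)
}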
I have some Perl code which uses the DBI module (the code is at work, I can post it in the morning if needed), but I am mainly trying to get a sense of what DBI needs in order to do an UPDATE to a row -- and get either errors back, or confirmation that the UPDATE was executed.
(Below is just a basic example; feel free to give your own example and sample DDL if you want... I just want some code that I know works. I've run my code via the Perl PtkDB debugger and can "see" the SQL it is generating and executing -- I can even paste it into the MySQL console and execute it... but it does nothing in the Perl, even though the SELECT statements are working. Mainly I just want a better idea of how DBI handles an UPDATE to MySQL, and whether there's any built-in feature in DBI that would make debugging this simpler. Thanks!)
So, please supply one full Perl script that:
Sets the connection (MySQL)
SELECT row two based on ID and get the first and last name
Lowercase the names
UPDATE the table
disconnect
Sample TABLE:

Id  FirstName  LastName
 1  John       Smith
 2  Jane       Doe
UPDATE (1): The code in question is below. The ONLY things I've changed are removing code not related to the issue, changing the config info (e.g. database name, user, password, etc.), and making the values assigned to the variables super simple. This code was created by someone else and lives in a legacy code base.
use strict;
use warnings;
use DBI;
sub dbOpen {
my $dsn;
my $dbh;
$dsn = "DBI:mysql:database=databasename;host=localhost;port=3306";
$dbh = DBI->connect( $dsn, "root", "password" ) ||
print STDERR "FATAL: Could not connect to database.\n$DBI::errstr\n";
$dbh->{ AutoCommit } = 0;
return($dbh);
} # END sub dbOpen
my $Data;
$Data = &dbOpen();
my ($sql,$rs,$sql_update_result);
my $column2;
my $column3;
my $id;
$column2 = 2;
$column3 = 3;
$id = 1;
$sql = "UPDATE table SET column1 = NULL, column2 = ".$column2.", column3 = ".$column3." WHERE id = ".$id.";";
$rs = $Data->prepare( $sql );
$rs->execute() || &die_clean("Couldn't execute\n$sql\n".$Data->errstr."\n" );
($sql_update_result) = $rs->fetchrow;
$Data->disconnect();
DDL for MySQL -- if needed, just comment and I'll post one.
UPDATE (2):
Finally found one complete example, though it's only for a SELECT statement and doesn't even insert any variables into the SQL: http://search.cpan.org/~timb/DBI/DBI.pm#Simple_Examples
Almost copy and paste from DBI Synopsis:
use DBI;
$dbh = DBI->connect($data_source, $username, $auth, \%attr);
$statement = "UPDATE some_table SET som_col = ? WHERE id = ?";
$rv = $dbh->do($statement, undef, $som_val, $id);
$DBI::err && die $DBI::errstr;
$rc = $dbh->disconnect;
I prefer to use do when updating or deleting, since these operations don't return any rows.
So, in order to get a little debugging output, I would modify your code like this:
my $sql = "UPDATE table SET column1=NULL, column2=$column2, column3=$column3 WHERE id=$id";
print STDERR "SQL: $sql\n"
my $numrows = $Data->do($sql);
if (not defined $numrows) {
print STDERR "ERROR: $DBI::errstr";
} else {
print STDERR "INFO: $numrows rows updated";
}
You can measure query response times from within your Perl code, but since this is a database matter, I recommend using a MySQL-specific tool (I don't use MySQL, sorry).
Have you considered something a bit higher level - like DBIx::Class?
You don't need to return the values, lowercase them in Perl, then update the rows. Just do that in one SQL statement:
my $sql = "UPDATE table SET column2=lower(column2) WHERE id = ?";
my $sth = $dbh->prepare($sql);
foreach my $id (@ids) {
    $sth->execute($id);
}
You also want to use placeholders to prevent Bobby Tables from visiting.
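Putting those pieces together, here is a minimal sketch of the whole flow the question asks for (the table name people and the connection details are placeholders; adjust them to your schema):
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect (MySQL); RaiseError makes DBI die on any error so we don't check every call
my $dbh = DBI->connect(
    "DBI:mysql:database=databasename;host=localhost;port=3306",
    "root", "password",
    { RaiseError => 1, AutoCommit => 1 },
);

# SELECT row two based on ID and get the first and last name
my ($first, $last) = $dbh->selectrow_array(
    "SELECT FirstName, LastName FROM people WHERE Id = ?", undef, 2);
die "No row with Id = 2\n" unless defined $first;

# Lowercase the names
$_ = lc $_ for $first, $last;

# UPDATE the table, using placeholders
my $rows = $dbh->do(
    "UPDATE people SET FirstName = ?, LastName = ? WHERE Id = ?",
    undef, $first, $last, 2);
print "Updated $rows row(s)\n";

# Disconnect
$dbh->disconnect;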
I am writing small snippets in Perl and DBI (SQLite, yay!).
I would like to log some specific queries to text files with the same filename as the table name(s) the query is run on.
Here is the code I use to dump results to a text file:
sub dumpResultsToFile {
    my ( $query ) = @_;
    # Prepare and execute the query
    my $sth = $dbh->prepare( $query );
    $sth->execute();
    # Open the output file
    open my $fh, '>', 'results.txt' or die "Can't open results output file: $!";
    # Dump the formatted results to the file
    $sth->dump_results( 80, "\n", ", ", $fh );
    # Close the output file
    close $fh or die "Error closing result file: $!\n";
}
Here is how I can call this :
dumpResultsToFile( <<"END_SQL" );
SELECT TADA.fileName, TADA.labelName
FROM TADA
END_SQL
What I effectively want is, instead of stuff going to "results.txt" ( that is hardcoded above ), it should now go to "TADA.txt".
Had this been a join between tables "HAI" and "LOL", then the resultset should be written to "HAI.LOL.txt"
Is what I am saying even possible using some magic in DBI?
I would rather do without parsing the SQL query for tables, but if there is a widely used and debugged function to grab source table names in a SQL query, that would work for me too.
What I want is just to have a filename
that gives some hint as to what query
output it holds. Segregating based on
table name seems a nice way for now.
Probably not. Your SQL generation code takes the wrong approach. You are hiding too much information from your program. At some point, your program knows which table to select from. Instead of throwing that information away and embedding it inside an opaque SQL command, you should keep it around. Then your logger function doesn't have to guess where the log data should go; it knows.
Maybe this is clearer with some code. Your code looks like:
sub make_query {
    my ($table, $columns, $conditions) = @_;
    return "SELECT $columns FROM $table WHERE $conditions";
}
sub run_query {
    my ($query) = @_;
    $dbh->prepare($query);
    ...
}
run_query( make_query( 'foo', '*', '1=1' ) );
This doesn't let you do what you want to do. So you should structure
your program to do something like:
sub make_query {
    my ($table, $columns, $conditions) = @_;
    return +{
        query => "SELECT $columns FROM $table WHERE $conditions",
        table => $table,
    };  # an object might not be a bad idea
}
sub run_query {
    my ($query) = @_;
    $dbh->prepare($query->{query});
    log_to_file( $query->{table}.'.log', ... );
    ...
}
run_query( make_query( 'foo', '*', '1=1' ) );
The API is the same, but now you have the information you need to log
the way you want.
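For instance, the question's dumpResultsToFile could then derive the filename from the table instead of hardcoding it. A rough sketch, using the same { query => ..., table => ... } shape as above:
sub dumpResultsToFile {
    my ($query) = @_;                            # the hash returned by make_query
    my $sth = $dbh->prepare( $query->{query} );
    $sth->execute();
    my $filename = $query->{table} . '.txt';     # e.g. "TADA.txt"
    open my $fh, '>', $filename or die "Can't open $filename: $!";
    $sth->dump_results( 80, "\n", ", ", $fh );
    close $fh or die "Error closing $filename: $!\n";
}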
Also, consider SQL::Abstract for dynamic SQL generation. My code
above is just an example.
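A rough sketch of that, using the question's table and columns (SQL::Abstract's select returns the SQL plus bind values, and you still have $table in hand for the log filename):
use SQL::Abstract;

my $sqla  = SQL::Abstract->new;
my $table = 'TADA';
my ($stmt, @bind) = $sqla->select($table, [qw(fileName labelName)]);
# $stmt is "SELECT fileName, labelName FROM TADA"; $table stays available for logging
my $sth = $dbh->prepare($stmt);
$sth->execute(@bind);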
Edit: OK, so you say you're using SQLite. It has an EXPLAIN command
which you could parse the output of:
sqlite> explain select * from test;
0|Trace|0|0|0|explain select * from test;|00|
1|Goto|0|11|0||00|
2|SetNumColumns|0|2|0||00|
3|OpenRead|0|2|0||00|
4|Rewind|0|9|0||00|
5|Column|0|0|1||00|
6|Column|0|1|2||00|
7|ResultRow|1|2|0||00|
8|Next|0|5|0||00|
9|Close|0|0|0||00|
10|Halt|0|0|0||00|
11|Transaction|0|0|0||00|
12|VerifyCookie|0|1|0||00|
13|TableLock|0|2|0|test|00|
14|Goto|0|2|0||00|
Looks like TableLock is what you would want to look for. YMMV, this
is a bad idea.
In general, in SQL, you cannot reliably deduce table names from result set, both for theoretical reasons (the result set may only consist of computed columns) and practical (the result set never includes table names - only column names - in its data).
So the only way to figure out the tables used is to store them with (or deduce them from) the original query.
I've heard good things about the parsing ability of SQL::Statement but never used it before now myself.
use SQL::Statement;
use strict;
use warnings;
my $sql = <<"END_SQL";
SELECT TADA.fileName, TADA.labelName
FROM TADA
END_SQL
my $parser = SQL::Parser->new();
$parser->{RaiseError} = 1;
$parser->{PrintError} = 0;
my $stmt = eval { SQL::Statement->new($sql, $parser) }
or die "parse error: $#";
print join',',map{$_->name}$stmt->tables;
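From there, building the log filename the question wants is one more line (e.g. "TADA.txt", or "HAI.LOL.txt" for a join):
my $filename = join('.', map { $_->name } $stmt->tables) . '.txt';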