In this script I have a problem with file name extensions:
if I use /home/mm/test_x it works; with files named /home/mm/test_x.csv it doesn't:
#!/usr/bin/env perl
use warnings; use strict;
use 5.012;
use DBI;
my $table_1 = '/home/mm/test_1.csv';
my $table_2 = '/home/mm/test_2.csv';
#$table_1 = '/home/mm/test_1';
#$table_2 = '/home/mm/test_2';
my $dbh = DBI->connect( "DBI:CSV:" );
$dbh->{RaiseError} = 1;
$table_1 = $dbh->quote_identifier( $table_1 );
$table_2 = $dbh->quote_identifier( $table_2 );
my $sth = $dbh->prepare( "SELECT a.id, a.name, b.city FROM $table_1 AS a NATURAL JOIN $table_2 AS b" );
$sth->execute;
$sth->dump_results;
$dbh->disconnect;
Output with file name extension:
DBD::CSV::st execute failed:
Execution ERROR: No such column '"/home/mm/test_1.csv".id' called from /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux/DBD/File.pm at 570.
Output without file name extension:
'1', 'Brown', 'Laramie'
'2', 'Smith', 'Watertown'
2 rows
Is this a bug?
cat test_1.csv
id,name
1,Brown
2,Smith
5,Green
cat test_2.csv
id,city
1,Laramie
2,Watertown
8,Springville
DBD::CSV provides a way to map the table names you use in your queries to filenames. The same mechanism is used to set up per-file attributes like line ending, field separator, etc.; look for 'csv_tables' in the DBD::CSV documentation.
#!/usr/bin/env perl
use warnings;
use strict;
use DBI;
my $dbh = DBI->connect("DBI:CSV:f_dir=/home/mm", { RaiseError => 1 });
$dbh->{csv_tables}->{table_1} = {
'file' => 'test_1.csv',
'eol' => "\n",
};
$dbh->{csv_tables}->{table_2} = {
'file' => 'test_2.csv',
'eol' => "\n",
};
my $sth = $dbh->prepare( "SELECT a.id, a.name, b.city FROM table_1 AS a NATURAL JOIN table_2 AS b" );
$sth->execute();
$sth->dump_results();
$dbh->disconnect();
In my case I had to specify a line ending character, because I created the CSV files in vi, so they ended up with Unix line endings, whereas DBD::CSV assumes DOS/Windows line endings regardless of the platform the script is run on.
It looks like even this works:
#!/usr/bin/env perl
use warnings; use strict;
use 5.012;
use DBI;
my $dbh = DBI->connect("DBI:CSV:f_dir=/home/mm/Dokumente", undef, undef, { RaiseError => 1, });
my $table = 'new.csv';
$dbh->do( "DROP TABLE IF EXISTS $table" );
$dbh->do( "CREATE TABLE $table ( id INT, name CHAR(64), city CHAR(64) )" );
my $sth_new = $dbh->prepare( "INSERT INTO $table( id, name, city ) VALUES ( ?, ?, ? )" );
$dbh->{csv_tables}->{table_1} = { 'file' => '/tmp/test_1.csv', 'eol' => "\n", };
$dbh->{csv_tables}->{table_2} = { 'file' => '/tmp/test_2.csv', 'eol' => "\n", };
my $sth_old = $dbh->prepare( "SELECT a.id, a.name, b.city FROM table_1 AS a NATURAL JOIN table_2 AS b" );
$sth_old->execute();
while ( my $hash_ref = $sth_old->fetchrow_hashref() ) {
state $count = 1;
$sth_new->execute( $count++, $hash_ref->{'a.name'}, $hash_ref->{'b.city'} );
}
$dbh->disconnect();
I think you might want to take a look at the f_ext and f_dir attributes. You can then refer to your tables as "test_1" and "test_2" without the .csv, while the files used will be test_1.csv and test_2.csv. The problem with a dot in the table name is that a dot is usually used to separate the schema from the table name (see f_schema).
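Here is a minimal, untested sketch of that approach; f_dir and f_ext are documented DBD::File/DBD::CSV attributes, and the "/r" suffix on f_ext marks the extension as required:
#!/usr/bin/env perl
use warnings;
use strict;
use DBI;
# table names now map to files in /home/mm with a required .csv extension
my $dbh = DBI->connect( "DBI:CSV:f_dir=/home/mm;f_ext=.csv/r", undef, undef,
    { RaiseError => 1 } );
my $sth = $dbh->prepare(
    "SELECT a.id, a.name, b.city FROM test_1 AS a NATURAL JOIN test_2 AS b" );
$sth->execute;
$sth->dump_results;
$dbh->disconnect;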
Related
I am trying to write a script that will read the data from a PostgreSQL table and insert it into an Oracle table; here is my script:
#!/usr/local/bin/perl
use strict;
use DBI;
use warnings FATAL => qw(all);
my $pgh = pgh(); # connect to postgres
my $ora = ora(); # connect to oracle
my @rows;
my $rows = [];
my $placeholders = join ", ", ("?") x @rows;
my $sth = $pgh->prepare('SELECT * FROM "Employees"');
$sth->execute();
while (@rows = $sth->fetchrow_array()) {
$ora->do("INSERT INTO employees VALUES($placeholders)");
}
#connect to postgres
sub pgh {
my $dsn = 'DBI:Pg:dbname=northwind;host=localhost';
my $user = 'postgres';
my $pwd = 'postgres';
my $pgh = DBI -> connect($dsn,$user,$pwd,{'RaiseError' => 1});
return $pgh;
}
#connect to oracle
sub ora {
my $dsn = 'dbi:Oracle:host=localhost;sid=orcl';
my $user = 'nwind';
my $pwd = 'nwind';
my $ora = DBI -> connect($dsn,$user,$pwd,{'RaiseError' => 1});
return $ora;
}
I am getting the following error :
DBD::Oracle::db do failed: ORA-00936: missing expression (DBD ERROR: error possibly near <*> indicator at char 29 in 'INSERT INTO employees VALUES(<*>)') [for Statement "INSERT INTO employees VALUES()"] at /usr/share/perlproj/cgi-bin/scripts/nwind_pg2ora.pl line 19.
Please help me to get my code correct.
Many thanks !!
Tonya.
See the documentation for DBD::Oracle: you have to bind the parameter value for BLOB columns, for example:
use DBD::Oracle qw(:ora_types);
$sth->bind_param($idx, $value, { ora_type=>ORA_BLOB, ora_field=>'PHOTO' });
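A fuller, untested sketch of how that binding could fit into the copy loop; the column count (5) and the position and name of the BLOB column (PHOTO) are assumptions to adjust for the real Employees table:
use DBD::Oracle qw(:ora_types);
# prepare once; the number of placeholders must match the real column count (assumed 5 here)
my $sth_ins = $ora->prepare('INSERT INTO employees VALUES (?, ?, ?, ?, ?)');
while ( my @row = $sth->fetchrow_array() ) {
    for my $idx ( 1 .. @row ) {
        if ( $idx == 5 ) {    # hypothetical position of the BLOB column
            $sth_ins->bind_param( $idx, $row[ $idx - 1 ],
                { ora_type => ORA_BLOB, ora_field => 'PHOTO' } );
        }
        else {
            $sth_ins->bind_param( $idx, $row[ $idx - 1 ] );
        }
    }
    $sth_ins->execute();
}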
my @rows;
my $rows = [];
my $sth = $pgh->prepare('SELECT * FROM "Employees"');
$sth->execute();
while (@rows = $sth->fetchrow_array()) {
my $placeholders = join ", ", ("?") x @rows;
$ora->do("INSERT INTO employees VALUES($placeholders)");
}
You're joining an empty @rows to create an empty $placeholders.
Perform the join inside the while loop, before the do().
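A minimal sketch of that change, which also passes the fetched row as bind values (the original do() call never supplied any):
while ( my @rows = $sth->fetchrow_array() ) {
    my $placeholders = join ", ", ("?") x @rows;
    $ora->do( "INSERT INTO employees VALUES($placeholders)", undef, @rows );
}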
The following lazily creates a statement handle for inserting into the Oracle database, based on the number of columns in the returned records.
It then inserts those column values into the database, so obviously we're assuming the table structures are identical:
use strict;
use DBI;
use warnings FATAL => qw(all);
my $pgh = pgh(); # connect to postgres
my $ora = ora(); # connect to oracle
my $sth = $pgh->prepare('SELECT * FROM "Employees"');
$sth->execute();
my $sth_insert;
while (my @cols = $sth->fetchrow_array()) {
$sth_insert ||= do {
my $placeholders = join ", ", ("?") x @cols;
$ora->prepare("INSERT INTO employees VALUES ($placeholders)");
};
$sth_insert->execute(@cols);
}
When I fetch data this way, is it possible to access the column names and column types afterwards, or do I need an explicit prepare to get at them?
use DBI;
my $dbh = DBI->connect( ... );
my $select = "...";
my #arguments = ( ... );
my $ref = $dbh->selectall_arrayref( $select, {}, @arguments );
Update:
With prepare I would do it this way:
my $sth = $dbh->prepare( $select );
$sth->execute( @arguments );
my $col_names = $sth->{NAME};
my $col_types = $sth->{TYPE};
my $ref = $sth->fetchall_arrayref;
unshift @$ref, $col_names;
The best solution is to use prepare to get a statement handle, as you describe in the second part of your question. If you use selectall_hashref or selectall_arrayref, you don't get a statement handle and have to query the column type information yourself via $dbh->column_info (see the DBI documentation):
my $sth = $dbh->column_info('','',$table,$column); # or $column='' for all
my $info = $sth->fetchall_arrayref({});
use Data::Dumper; print Dumper($info);
(specifically, the COLUMN_NAME and TYPE_NAME attributes).
However, this introduces a race condition if the table changes schema between the two queries.
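For example, a small sketch that pulls COLUMN_NAME and TYPE_NAME out of the rows returned by the fetchall_arrayref({}) call above:
my %type_for = map { $_->{COLUMN_NAME} => $_->{TYPE_NAME} } @$info;
print "$_ => $type_for{$_}\n" for sort keys %type_for;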
Also, you may use selectall_arrayref with the Slice parameter to fetch all the columns into hash refs. It needs no prepared statement and returns an array ref of the result set rows, with each row's column names as the keys of a hash and the column values as the values, i.e.:
my $result = $dbh->selectall_arrayref( qq{
SELECT * FROM table WHERE condition = value
}, { Slice => {} }) or die "Error: ".$dbh->errstr;
$result = [
    { column1 => 'column1Value', column2 => 'column2Value', ... },   # row 0
    { column1 => 'column1Value', column2 => 'column2Value', ... },   # row 1
];
This makes it easy to iterate over the results, e.g.:
for my $row ( @$result ) {
print "$row->{column1}, $row->{column2}\n";
}
You can also specify which columns to extract, but that is of limited use, since it is more efficient to restrict the columns in the SQL query itself.
{ Slice => { column1Name => 1, column2Name => 1 } }
That would only return the values for column1Name and column2Name, just as if your SQL said:
SELECT column1Name, column2Name FROM table...
#!/usr/bin/env perl
use warnings;
use 5.012;
use DBI;
my $dsn = "DBI:Proxy:hostname=horst;port=2000;dsn=DBI:ODBC:db1.mdb";
my $dbh = DBI->connect( $dsn, undef, undef ) or die $DBI::errstr;
$dbh->{RaiseError} = 1;
$dbh->{PrintError} = 0;
my $my_table = 'my_table';
eval{ $dbh->do( "DROP TABLE $my_table" ) };
$dbh->do( "CREATE TABLE $my_table" );
my $ref = [ qw( 1 2 ) ];
for my $col ( 'col_1', 'col_2', 'col_3' ) {
my $add = "$col INT";
$dbh->do( "ALTER TABLE $my_table ADD $add" );
my $sql = "INSERT INTO $my_table ( $col ) VALUES( ? )";
my $sth = $dbh->prepare( $sql );
$sth->bind_param_array( 1, $ref );
$sth->execute_array( { ArrayTupleStatus => \my @tuple_status } );
}
my $sth = $dbh->prepare( "SELECT * FROM $my_table" );
$sth->execute();
$sth->dump_results();
$dbh->disconnect;
This script outputs:
'1', undef, undef
'2', undef, undef
undef, '1', undef
undef, '2', undef
undef, undef, '1'
undef, undef, '2'
6 rows
How do I have to change this script to get this output:
'1', '1', '1'
'2', '2', '2'
2 rows
Do this in two steps:
Create the 3 columns
insert data in them
You prepare an SQL statement 3 times and execute each one twice (for the values 1 and 2), so you get 6 rows. Without knowing what you are trying to achieve I'd only be guessing at what you really want, but the following results in the output you asked for:
my $ref = [ qw( 1 2 ) ];
for my $col ( 'col_1', 'col_2', 'col_3' ) {
my $add = "$col INT";
$dbh->do( "ALTER TABLE $my_table ADD $add" );
}
$sql = "INSERT INTO $my_table ( col_1, col_2, col_3 ) VALUES( ?,?,? )";
my $sth = $dbh->prepare( $sql );
$sth->bind_param_array( 1, $ref );
$sth->bind_param_array( 2, $ref );
$sth->bind_param_array( 3, $ref );
$sth->execute_array( { ArrayTupleStatus => \my @tuple_status } );
I need to insert values from a hash into a database. Following is the code template I have for inserting values into table1's key and value columns:
use DBI;
use strict;
my %hash; # assuming it already contains the desired values
my $dbh = DBI->connect(
"dbi:Sybase:server=$Srv;database=$Db",
"$user", "$passwd"
) or die sprintf 'could not connect to database %s', DBI->errstr;
my $query= "Insert INTO table1(key, values) VALUES (?,?) ";
my $sth = $dbh->prepare($query)
or die "could not prepare statement\n", $dbh->errstr;
$sth->execute or die "could not execute", $sth->errstr;
I know how to insert values from arrays, i.e. using execute_array(), but I do not know how to insert the values present in %hash into table1.
Any suggestions?
The following uses the execute_array function as mentioned in your question. I tested it.
my $dbh = DBI->connect("DBI:mysql:database=$DB;host=$host;port=$port", $user, $password);
my %hash = (
1 => 'A',
2 => 'B',
0 => 'C',
);
my @keys = keys %hash;
my @values = values %hash;
my $sth = $dbh->prepare("INSERT INTO table1(id, value) VALUES (?,?);");
$sth->execute_array({}, \@keys, \@values);
(Sorry, I don't have a Sybase database to work with, or I'd use it as an example.)
Try SQL::Abstract
use DBI;
use SQL::Abstract;
use strict;
my %hash; # assuming it already contains the desired values
my $dbh = DBI->connect(
"dbi:Sybase:server=$Srv;database=$Db",
"$user", "$passwd"
) or die sprintf 'could not connect to database %s', DBI->errstr;
my $sql = SQL::Abstract->new;
my ($query, @bind) = $sql->insert("tableName", \%hash);
my $sth = $dbh->prepare($query)
or die "could not prepare statement\n", $dbh->errstr;
$sth->execute(@bind) or die "could not execute", $sth->errstr;
Here's a mostly easy way to build the query. I will typically do something like this because I haven't found another workaround yet.
use strict;
use DBI;
my $dbh = Custom::Module::Make::DBH->connect('$db');
my %hash = (
apple => 'red',
grape => 'purple',
banana => 'yellow',
);
my $keystr = (join ",\n ", (keys %hash));
my $valstr = join ', ', (split(/ /, "? " x (scalar(values %hash))));
my @values = values %hash;
my $query = qq`
INSERT INTO table1 (
$keystr
)
VALUES (
$valstr
)
`;
my $sth = $dbh->prepare($query)
or die "Can't prepare insert: ".$dbh->errstr()."\n";
$sth->execute(@values)
or die "Can't execute insert: ".$dbh->errstr()."\n";
But it's possible I also didn't understand the question correctly :P
Maybe you could try using
for my $key (keys %hash) {
$sth->execute($key, $hash{$key}) or die $sth->errstr;
}
Is this what you're trying to achieve?
If I understand the manual correctly ("Execute the prepared statement once for each parameter tuple (group of values) [...] via a reference passed ..."), it should also be possible to simply do
($tuples, $rows) = $sth->execute_array(\%hash) or die $sth->errstr;
I need to insert values into a database using Perl's DBI module. I have parsed a file to obtain these values, and hence they are present in arrays, say @array1, @array2, @array3. I know how to insert one value at a time but not from arrays.
This is how I insert one row at a time:
$dbh = DBI->connect("dbi:Sybase:server=$Srv;database=$Db", "$user", "$passwd") or die "could not connect to database";
$query= "INSERT INTO table1 (id, name, address) VALUES (DEFAULT, tom, Park_Road)";
$sth = $dbh->prepare($query) or die "could not prepare statement\n";
$sth->execute or die "could not execute statement\n$query\n";
I am not sure how I would insert the values if @array1 contains the ids, @array2 the names, and @array3 the addresses.
Since you have parallel arrays, you could take advantage of execute_array:
my $sth = $dbh->prepare('INSERT INTO table1 (id, name, address) VALUES (?, ?, ?)');
my $num_tuples_executed = $sth->execute_array(
{ ArrayTupleStatus => \my @tuple_status },
\@ids,
\@names,
\@addresses,
);
Please note that this is a truncated (and slightly modified) example from the documentation. You'll definitely want to check out the rest of it if you decide to use this function.
Use placeholders.
Update: I just realized you have parallel arrays. That is really not a good way of working with data items that go together. With that caveat, you can use List::MoreUtils::each_array:
#!/usr/bin/perl
use strict; use warnings;
use DBI;
use List::MoreUtils qw( each_array );
my $dbh = DBI->connect(
"dbi:Sybase:server=$Srv;database=$Db",
$user, $passwd,
) or die sprintf 'Could not connect to database: %s', DBI->errstr;
my $sth = $dbh->prepare(
'INSERT INTO table1 (id, name, address) VALUES (?, ?, ?)'
) or die sprintf 'Could not prepare statement: %s', $dbh->errstr;
my @ids = qw( a b c );
my @names = qw( d e f );
my @addresses = qw( g h i );
my $it = each_array(@ids, @names, @addresses);
while ( my @data = $it->() ) {
$sth->execute( @data )
or die sprintf 'Could not execute statement: %s', $sth->errstr;
}
$dbh->commit
or die sprintf 'Could not commit updates: %s', $dbh->errstr;
$dbh->disconnect;
Note that the code is not tested.
You might also want to read the FAQ entry: What's wrong with always quoting "$vars"?
Further, given that the only way you are handling error is by dying, you might want to consider specifying { RaiseError => 1 } in the connect call.
How could you not be sure what your arrays contain? Anyway, the approach would be to iterate through one array and, assuming the other arrays have corresponding values at the same indices, put those into the insert statement, as sketched below.
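A minimal, untested sketch of that direct approach (the table and column names are taken from the question, and the three arrays are assumed to be index-aligned):
my $sth = $dbh->prepare(
    'INSERT INTO table1 (id, name, address) VALUES (?, ?, ?)'
) or die $dbh->errstr;
for my $i ( 0 .. $#array1 ) {
    $sth->execute( $array1[$i], $array2[$i], $array3[$i] )
        or die $sth->errstr;
}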
Another way would be to use a hash per row as an intermediate storage area, i.e.:
# build one hash per row, then insert each of them
my @rows;
for my $i ( 0 .. $#array1 ) {
    push @rows, {
        id      => $array1[$i],
        name    => $array2[$i],
        address => $array3[$i],
    };
}
my $sql = "INSERT INTO table1 (id, name, address) VALUES (?, ?, ?)";
my $sth = $dbh->prepare($sql) or die $dbh->errstr;
for my $row ( @rows ) {
    $sth->execute( $row->{id}, $row->{name}, $row->{address} ) or die $sth->errstr;
}
Though again, this depends on the three arrays being kept in sync. You could also modify this to adjust or check values from the other arrays within the first loop (e.g. if the values in @array2 and @array3 are stored as "NN-name" and "NN-address", where NN is the id from the first array, and you need to find the corresponding values and strip the NN- prefix with an s/// substitution). It depends on how your data is structured.
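A tiny, purely hypothetical sketch of that clean-up step, assuming the arrays are already index-aligned and use the "NN-" prefix format described above:
# e.g. $array2[$i] looks like "3-Smith" where 3 is $array1[$i]
for my $i ( 0 .. $#array2 ) {
    $array2[$i] =~ s/^\Q$array1[$i]\E-//;
    $array3[$i] =~ s/^\Q$array1[$i]\E-//;
}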
Another note: check out Class::DBI and see whether it provides a nicer, more object-oriented way of getting your data in.
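For illustration only, a rough, untested Class::DBI sketch; the package name and column list are made up, and the connection details are the ones from the question:
package My::Table1;
use base 'Class::DBI';
__PACKAGE__->connection("dbi:Sybase:server=$Srv;database=$Db", $user, $passwd);
__PACKAGE__->table('table1');
__PACKAGE__->columns(All => qw( id name address ));

package main;
for my $i ( 0 .. $#array1 ) {
    My::Table1->create({
        id      => $array1[$i],
        name    => $array2[$i],
        address => $array3[$i],
    });
}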