Perl: execute SQL, print output to a file AND write to screen

I call a SQL file thru my perl script, which writes the output to a log file, as:
system("sqlplus -s schema/pwd\@dbname \@$sql_file > $log_file");
However, I would like to have the output written to the screen as well. Is there a way to do this (other than re-executing the command sans writing to the log file)?

You can capture the results yourself and send them to both targets.
my $output = `sqlplus -s schema/pwd\@dbname \@$sql_file`;
print $output;
open( my $file, '>', $log_file ) or die $!;
print {$file} $output;
close $file;

You can effectively tee the output of the command, and save some memory, by reading its STDOUT using a pipe:
open(my $cmdfh, "sqlplus -s schema/pwd\@dbname \@$sql_file |") or die $!;
open(my $logfh, '>', $log_file ) or die $!;
while (<$cmdfh>) {
    print;
    print {$logfh} $_;
}
close $logfh;
close $cmdfh;
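If the connection string or file name ever contains shell metacharacters, the list form of pipe-open avoids the shell entirely: each argument is passed to the command as-is. A minimal sketch, using the current perl ($^X) as a stand-in for sqlplus:

```perl
use strict;
use warnings;

# Tee a command's output to the screen and a log file.
# The list form of '-|' open forks the command without a shell,
# so no argument is ever re-parsed or interpolated.
my $log_file = 'tee_demo.log';

open my $cmdfh, '-|', $^X, '-e', 'print "row1\nrow2\n"'
    or die "Can't run command: $!";
open my $logfh, '>', $log_file
    or die "Can't open $log_file: $!";

while (my $line = <$cmdfh>) {
    print $line;            # to the screen
    print {$logfh} $line;   # to the log file
}
close $logfh;
close $cmdfh;
```

Note that the list form of pipe-open is not available on every platform (notably some Windows perls); there the single-string form shown above is the fallback.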

Related

Perl-Copying file from one location to other but content not copying

I am writing a script in Perl where I create a file and get input from the user for it, but when I copy that file to another location, the copy is created empty. My code is
#!/usr/bin/perl -w
for ($i = 1; $i < 5; $i++)
{
    open(file1, "</u/man/fr$i.txt");
    print "Enter text for file $i";
    $txt = <STDIN>;
    print file1 $txt;
    open(file2, ">/u/man/result/fr$i.txt");
    while (<file1>)
    {
        print file2 $_;
    }
    close(file1);
    close(file2);
}
fr1 to fr4 are created, but they are empty. When I run my code it asks for input, I provide it, and the code runs without error, but the files are still empty. Please help.
In line number 4 I changed < to >, as I thought creating a new file might need that, but it still doesn't work.
You need to close the filehandle that was written to in order to be able to read from that file.
use warnings;
use strict;
use feature 'say';
for my $i (1..4)
{
    my $file = "file_$i.txt";
    open my $fh, '>', $file or die "Can't open $file: $!";
    say $fh "Written to $file";

    # Opening the same filehandle first *closes* it if already open
    open $fh, '<', $file or die "Can't open $file: $!";

    my $copy = "copy_$i.txt";
    open my $fh_cp, '>', $copy or die "Can't open $copy: $!";
    while (<$fh>) {
        print $fh_cp $_;
    }
    close $fh_cp;    # in case of early errors in later iterations
    close $fh;
}
This creates the four files, file_1.txt etc, and their copies, copy_1.txt etc.
Please note the checks on whether open worked: they are essential, not optional.
You can't write to a filehandle that's not open for writing. You can't read from a filehandle that's not open for reading. Never ignore the return value of open.
#!/usr/bin/perl
use warnings; # Be warned about mistakes.
use strict; # Prohibit stupid things.
for my $i (1 .. 4) {                              # lexical variable, range
    open my $FH1, '>', "/u/man/fr$i.txt"          # 3-argument open, lexical filehandle, open for writing
        or die "$i: $!";                          # checking the return value of open
    print "Enter text for file $i: ";
    my $txt = <STDIN>;
    print {$FH1} $txt;
    close $FH1;                                   # flush and close before reading the file back

    open my $FH2, '<', "/u/man/fr$i.txt"          # reopen for reading
        or die "$i: $!";
    open my $FH3, '>', "/u/man/result/fr$i.txt" or die "$i: $!";
    while (<$FH2>) {
        print {$FH3} $_;
    }
    close $FH3;
}
I opened the file in write mode using filehandle 1, then opened the same file again in read mode using the same filehandle, then opened filehandle 2 for the destination. It is working fine for me now.
Or just copy the file with a shell command from Perl:
system("cp myfile1.txt /somedir/myfile2.txt");
`cp myfile1.txt /somedir/myfile2.txt`;
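Within Perl itself, the core File::Copy module does the same job without spawning a shell (the file names here are just for the demo):

```perl
use strict;
use warnings;
use File::Copy qw(copy);

# Demo setup: create a small source file to copy.
open my $fh, '>', 'myfile1.txt' or die "Can't create myfile1.txt: $!";
print {$fh} "some content\n";
close $fh;

# copy() returns true on success, so the usual or-die idiom applies.
copy('myfile1.txt', 'myfile2.txt')
    or die "Copy failed: $!";
```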

How to call shell from perl script

Perl script reads url from config file. In config file data stored as URL=http://example.com.
How can I get the site name only? I've tried
open(my $fh, "cut -d= -f2 'webreader.conf'");
But it doesn't work.
Please, help!
You have to indicate, with the read-pipe mode -|, that what follows is a command to be forked:
open(my $fh, "-|", "cut -d= -f2 'webreader.conf'") or die $!;
print <$fh>; # print output from command
Better approach would be to read file directly by perl,
open( my $fh, "<", "webreader.conf" ) or die $!;
while (<$fh>) {
    chomp;
    my @F = split /=/;
    print @F > 1 ? "$F[1]\n" : "$_\n";
}
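One caveat with split /=/: it also splits on any = inside the URL itself, and query strings often contain them. Anchoring a capture on the key is safer. A sketch reading from an in-memory filehandle that stands in for webreader.conf:

```perl
use strict;
use warnings;

my $conf = "URL=http://example.com/page?a=1\n";
open my $fh, '<', \$conf or die $!;   # in-memory filehandle for the demo

my @urls;
while (<$fh>) {
    chomp;
    # Anchor on the key and capture the whole rest of the line,
    # so '=' characters inside the URL survive intact.
    push @urls, $1 if /^URL=(.*)/;
}
print "$_\n" for @urls;
```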
Maybe something like this?
$ cat urls.txt
URL=http://example.com
URL=http://example2.com
URL=http://exampleXXX.com
$ ./urls.pl
http://example.com
http://example2.com
http://exampleXXX.com
$ cat urls.pl
#!/usr/bin/perl
$file = 'urls.txt';
open(X, $file) or die("Could not open file.");
while (<X>) {
    chomp;
    s/URL=//g;
    print "$_\n";
}
close(X);

Perl script for batch file processing

I have a relatively simple question for you experts. I have 300 files in a directory that I want to process with my Perl script (shown below). I was wondering if there is a way to use a variable and process the files in a batch in Perl. I have a file containing a list of the file names, if that helps.
Your feedback will be appreciated.
====================================
#!/usr/bin/perl
use strict;
use warnings;
open (FILE1, "001.txt") or die ("Can't open file $!");
while (<FILE1>) {
    my $line = $_;
    chomp $line;
    if ( $line =~ m/^chr/ ) {
        open OUT, '>>', '001_tmp.txt';
        print OUT "$line\n";
    }
}
close(OUT);
close(FILE1);
======================================
Clarification:
Basically I want the perl script that is equivalent to the following shell script where I can accommodate all files using the variable.
#!/bin/bash
if [[ $# != 1 ]]
then
    echo "Usage: error <input>"
else
    echo $# $1
    export input=$1
    grep "^chr" $1 > ${input}_tmp.vcf
fi
So you want your while loop to read through each file in some given directory.
I would do something like this:
Use opendir and readdir so you can get the file names to operate on.
I would also look at grep to filter out the files you don't care about; in my example I filter out directories...
opendir(my $dh, $dir) or die "$dir: $!";
my @files = grep { !-d "$dir/$_" } readdir $dh;   # readdir returns bare names, so test the full path
closedir $dh;
Now you will have a list of files to do work on...
for my $file (@files) {
    open my $fh, "<", "$dir/$file" or die "$dir/$file: $!";
    while ( my $line = <$fh> ) {
        #TODO: stuff
    }
    close $fh;
}
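An alternative that skips opendir entirely: glob matches a pattern and returns paths with the directory already attached, so each result can be opened directly. A runnable sketch using a throwaway directory:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Demo setup: a throwaway directory containing one file.
my $dir = tempdir(CLEANUP => 1);
open my $out, '>', "$dir/a.txt" or die $!;
print {$out} "hello\n";
close $out;

# glob hands back full paths; -f keeps plain files only.
my @files = grep { -f } glob "$dir/*";
for my $file (@files) {
    open my $fh, '<', $file or die "$file: $!";
    while (my $line = <$fh>) {
        print $line;
    }
    close $fh;
}
```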
Edit: Your tags indicated batch-file, meaning Windows batch file. If that's not what you mean, disregard this. :-)
Perhaps something like this:
From a batch file:
for /f %%x in (listoffilenames.txt) do (
    perl myperlscript.pl %%x
)
And then your Perl script can be modified like this:
#!/usr/bin/perl
use strict;
use warnings;
# You may want to add a little more error handling
# around getting the filename, etc.
my $filename = shift or die "No filename specified.";
open (FILE1, "<", $filename) or die ("Can't open file $!");
while (<FILE1>) {
    my $line = $_;
    chomp $line;
    if ( $line =~ m/^chr/ ) {
        open OUT, '>>', "temp-$filename";
        print OUT "$line\n";
    }
}
close(OUT);
close(FILE1);

Perl reading and writing in files

Alright, so I'm back with another question. I know that in Python you can read in a file without specifying which file it will be until you are at the command prompt, so you can set the script up to read in any file you want without going back and changing the code every time. Is there a way to do this in Perl? If so, can you write files that way too? Thanks.
This is what I have:
open (LOGFILE, "UNSUCCESSFULOUTPUT.txt") or die "Can't find file";
open FILE, ">", "output.txt" or die $!;
while (<LOGFILE>) {
    print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
This is what I have now:
#!/usr/local/bin/perl
my $argument1 = $ARGV[0];
open (LOGFILE, "<$argument1") or die "Can't find file";
open FILE, ">>output.txt" or die $!;
while (<LOGFILE>) {
    print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
And it's still not appending...
Command line arguments are provided in @ARGV. You can do as you please with them, including passing them as file names to open.
my ($in_qfn, $out_qfn) = @ARGV;
open(my $in_fh, '<', $in_qfn ) or die $!;
open(my $out_fh, '>', $out_qfn) or die $!;
print $out_fh $_ while <$in_fh>;
But that's not a very unixy way of doing things. In unix tradition, the following will read from every file specified on the command line, one line at a time:
while (<>) {
    ...
}
Output is usually placed in files through redirection.
#!/usr/bin/env perl
# This is mycat.pl
print while <>;
# Example usage.
mycat.pl foo bar > baz
# Edit foo in-place.
perl -i mycat.pl foo
The only time one usually touches @ARGV is to process options, and even then, one usually uses Getopt::Long instead of touching @ARGV directly.
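A minimal Getopt::Long sketch (the --output option is invented for the example); after GetOptions runs, @ARGV holds only the leftover file names, ready for the diamond operator:

```perl
use strict;
use warnings;
use Getopt::Long;

# Simulate a command line for the demo.
local @ARGV = ('--output', 'out.txt', 'in1.txt', 'in2.txt');

my $output = 'default.txt';
GetOptions('output=s' => \$output)
    or die "Bad options\n";

# The option and its value have been consumed;
# only the file names remain in @ARGV.
print "output=$output files=@ARGV\n";
```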
Regarding your code, your script should be:
#!/usr/bin/env perl
while (<>) {
    print "ERROR in line $.\n" if /Error/;
}
Usage:
perl script.pl UNSUCCESSFULOUTPUT.txt >output.txt
You can get rid of perl from the command if you make script.pl executable (chmod u+x script.pl).
This is what I believe you want:
#!/usr/bin/perl
my $argument1 = $ARGV[0];
open (LOGFILE, "<$argument1") or die "Can't find file";
open (FILE, ">output.txt") or die $!;
while (<LOGFILE>) {
    print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
Run from the command line as:
> perl nameofpl.pl mytxt.txt
For appending change this line:
open (FILE, ">output.txt") or die $!;
To the remarkably similar:
open (FILE, ">>output.txt") or die $!;
I assume you are asking how to pass an argument to a perl script. This is done with the @ARGV variable.
use strict;
use warnings;
my $file = shift; # implicitly shifts from @ARGV
print "The file is: $file\n";
You can also make use of the magic of the diamond operator <>, which will open the arguments to the script as files, or use STDIN if no arguments are supplied. The diamond operator is used as a normal file handle, typically while (<>) ...
ETA:
With the code you supplied, you can make it more flexible by doing this:
use strict;
use warnings; # always use these
my $file = shift; # first argument, required
my $outfile = shift // "output.txt"; # second argument, optional
open my $log, "<", $file or die $!;
open my $out, ">", $outfile or die $!;
while (<$log>) {
    print $out "ERROR in line $.\n" if (/Error/);
}
Also see ikegami's answer on how to make it more like other unix tools, e.g. accept STDIN or file arguments, and print to STDOUT.
As I commented in your earlier question, you may simply wish to use an already existing tool for the job:
grep -n Error input.txt > output.txt

Printing the output of a dereferenced variable into a file

I had written a module to bifurcate a file into training and test sets. The output is fine, but it would be much easier for the students if the output of the two referenced variables, @$test and @$training, were redirected to two different files. Here is the code:
use Cut;
my($training,$test)=Cut::cut_80_20('data.csv') ;
print "======TRAINING======\n" . "@$training\n";
print "======TEST==========\n" . "@$test\n";
print takes an optional filehandle before the data to output. Open your files and print away:
open( my $training_fh, '>', 'training.csv' ) or die "Couldn't open training.csv: $!";
print $training_fh "======TRAINING======\n" . "@$training\n";
open( my $test_fh, '>', 'test.csv' ) or die "Couldn't open test.csv: $!";
print $test_fh "======TEST==========\n" . "@$test\n";
It's very easy:
open my $fh1, '>', "training.out" or die "failed to open training.out ($!)";
print $fh1 "======TRAINING======\n";
print $fh1 "@$training\n";
close $fh1;
open my $fh2, '>', "test.out" or die "failed to open test.out ($!)";
print $fh2 "======TEST==========\n";
print $fh2 "@$test\n";
close $fh2;
Note the absence of a comma after the file handle in the print statements. You can add newlines and such like as necessary.
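The braces around the handle are optional with a plain lexical, but they become mandatory when the handle lives in a data structure or comes from an expression. A small sketch (file name invented for the demo):

```perl
use strict;
use warnings;

# Handles stored in a hash need the block form of print:
# 'print $out{log} ...' would be a syntax error.
my %out;
open $out{log}, '>', 'braces_demo.log' or die $!;
print {$out{log}} "hello\n";
close $out{log};
```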