How to call shell from a Perl script

My Perl script reads a URL from a config file. In the config file the data is stored as URL=http://example.com.
How can I get the site name only? I've tried
open(my $fh, "cut -d= -f2 'webreader.conf'");
but it doesn't work.
Please help!

You have to indicate with the read-pipe mode -| that what follows is a command which gets forked:
open(my $fh, "-|", "cut -d= -f2 'webreader.conf'") or die $!;
print <$fh>; # print output from command
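As a side note, the three-argument pipe open also accepts the command as a list, which bypasses the shell and its quoting rules entirely; a sketch of the same cut call (it requires a platform that supports fork):
open(my $fh, "-|", "cut", "-d=", "-f2", "webreader.conf") or die $!;  # list form: no shell involved
print <$fh>;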
A better approach would be to read the file directly in Perl:
open( my $fh, "<", "webreader.conf" ) or die $!;
while (<$fh>) {
    chomp;
    my @F = split /=/;
    print @F > 1 ? "$F[1]\n" : "$_\n";
}
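If by "site name" you mean just the host part (example.com) rather than the whole URL, the URI module from CPAN can pull it out of the value; a small sketch along the same lines (the config file name and key come from the question):
use strict;
use warnings;
use URI;
open( my $fh, "<", "webreader.conf" ) or die $!;
while (<$fh>) {
    chomp;
    my ( $key, $value ) = split /=/, $_, 2;
    next unless defined $value and $key eq 'URL';
    print URI->new($value)->host, "\n";   # prints "example.com"
}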

Maybe something like this?
$ cat urls.txt
URL=http://example.com
URL=http://example2.com
URL=http://exampleXXX.com
$ ./urls.pl
http://example.com
http://example2.com
http://exampleXXX.com
$ cat urls.pl
#!/usr/bin/perl
use strict;
use warnings;
my $file = 'urls.txt';
open( my $fh, '<', $file ) or die "Could not open $file: $!";
while (<$fh>) {
    chomp;
    s/^URL=//;    # strip the key, keep the URL value
    print "$_\n";
}
close($fh);

Related

How to trim lines based on line number in Perl

My Perl script generates a text file which usually contains 200 lines. Sometimes it exceeds 200 lines (for example, 217 lines). I need to trim off everything from the 201st line onward. I have used a counter to drop the extra lines. Is there a simpler and more efficient way to do this?
Code:
#!/usr/bin/perl -w
use strict;
use warnings;
my $filename1="channel.txt";
my $filename2="channel1.txt";
my $fh;
my $fh1;
my $line;
my $line1;
my $count=1;
open $fh, '<', $filename1 or die "Can't open > $filename1: $!";
open $fh1, '>', $filename2 or die "Can't open > $filename2: $!";
while(my $line = <$fh>)
{
    chomp $line;
    chomp $line1;
    if($count<201)
    {
        print $fh1 "$line\n";
    }
    $count++;
}
close ($fh1);
close($fh);
I have already mentioned this in my comment; this is the short version of that comment. If you are actually trying to trim the file, you can use a Perl one-liner instead of writing the whole code:
perl -pe 'last if ($. == 201);' input.txt > result.txt
-p is used to process the file line by line and print the output
-e is the execute flag, to execute the Perl code that follows
With a Perl script you can do this also:
open my $fh, "<", "input.txt" or die $!;
open my $wh, ">", "result.txt" or die $!;
print $wh scalar <$fh> for 1 .. 200;   # copy the first 200 lines
xxfelixxx already gave you the correct answer. I am just changing my earlier posted answer, to clean up your code and to write back to the original file:
use strict;
use warnings;
my @array;
my $filename = "channel.txt";
open my $fh, '<', $filename or die "Can't open < $filename: $!";
while ( my $line = <$fh> ) {
    last if $. > 200;
    push @array, $line;
}
close($fh);
open $fh, '>', $filename or die "Can't open > $filename: $!";
print $fh @array;
close($fh);
There is no need to keep your own counter; Perl has a special variable, $., which keeps track of the input line number. You can simplify your loop like so:
while ( my $line = <$fh> ) {
    last if $. > 200;
    chomp $line;
    print $fh1 "$line\n";
}
perldoc perlvar - Search for INPUT_LINE_NUMBER.
To write back to the original file, input.txt, without using redirection:
perl -pi.tmp -we "last if $.>200;" input.txt
where
-i : opens a temp file and automatically replaces the file being
     edited with the temporary file after processing (the '.tmp'
     is the suffix to use for the temp file during processing)
-w : command-line flag for 'use warnings'
-p : magic; basically equivalent to coding:
     LINE: while (defined($_ = <ARGV>)) {
         "your code here";
     } continue {
         print $_;
     }
-e : the Perl code follows this flag (enclosed in double quotes for MSWin32 aficionados)

Read two text files passed as arguments and display their contents using Perl

I have two text files and I want to read them by passing arguments on the command line.
How do I take the second file? When I give the second file name, the command line is not reading it. Please suggest.
I have used $ARGV[0] and $ARGV[1] in the code to receive the arguments from the command line.
$ ./read.pl file1 file2
Reading file1
Reading file2
$ cat read.pl
#!/usr/bin/perl
use strict;
use warnings;
readFile($_) for @ARGV;
sub readFile {
    my $filename = shift;
    print "Reading $filename\n";
    # OPEN CLOSE stuff here
}
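For completeness, a filled-in version of that stub, which opens each file, prints its lines, and closes it (a sketch; the error message wording is my own):
sub readFile {
    my $filename = shift;
    print "Reading $filename\n";
    open my $fh, '<', $filename or die "Can't open $filename: $!";
    print while <$fh>;    # print every line of the file
    close $fh;
}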
my ($file1, $file2) = @ARGV;
open my $fh1, '<', $file1 or die $!;
open my $fh2, '<', $file2 or die $!;
while (<$fh1>) {
    # do something with $_
}
while (<$fh2>) {
    # do something with $_
}
close $fh1;
close $fh2;
Where $_ is the default variable.
run as:
perl readingfile.pl filename1 filename2

Perl: execute SQL, print output to a file AND write to the screen

I call a SQL file through my Perl script, which writes the output to a log file, as:
system("sqlplus -s schema/pwd\#dbname \#$sql_file > $log_file");
However, I would like to have the output written to the screen as well. Is there a way to do this (other than re-executing the command sans writing to the log file)?
You can capture the results yourself and send them to both targets.
my $output = `sqlplus -s schema/pwd\#dbname \#$sql_file`;
print $output;
open( my $file, '>', $log_file ) or die $!;
print {$file} $output;
close $file;
You can effectively tee the output of the command, and save some memory, by reading its STDOUT using a pipe:
open(my $cmdfh, "sqlplus -s schema/pwd\#dbname \#$sql_file |") or die $!;
open(my $logfh, '>', $log_file ) or die $!;
while (<$cmdfh>) {
    print;
    print {$logfh} $_;
}
close $logfh;
close $cmdfh;
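If an external helper is acceptable, the shell's tee command does the same split in one line, copying the output both to the screen and to the log file (a sketch reusing the variables from the question):
# tee duplicates sqlplus's stdout to the screen and to $log_file
system("sqlplus -s schema/pwd\@dbname \@$sql_file | tee $log_file") == 0
    or warn "sqlplus/tee pipeline failed: $?";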

Redirecting the result to a text file

I wrote a Perl script that creates a hash directly from the contents of the first file, then reads each line of the second and checks the hash to see if the line should be printed.
Here is the Perl script:
use strict;
use warnings;
use autodie;
my %permitted = do {
    open my $fh, '<', 'f1.txt';
    map { /(.+?)\s+\(/, 1 } <$fh>;
};
open my $fh, '<', 'f2.txt';
while (<$fh>) {
    my ($phrase) = /(.+?)\s+->/;
    print if $permitted{$phrase};
}
I am looking for how to print the result to a text file, because this script currently prints the result on the screen.
Thank you in advance.
Cordially
$ perl thescript.pl > result.txt
Will run your script and put the printed output in result.txt
Or, from within the script itself:
use strict;
use warnings;
use autodie;
my %permitted = do {
    open my $fh, '<', 'f1.txt';
    map { /(.+?)\s+\(/, 1 } <$fh>;
};
# Open result.txt for writing:
open my $out_fh, '>', 'result.txt' or die "open: $!";
open my $fh, '<', 'f2.txt';
while (<$fh>) {
    my ($phrase) = /(.+?)\s+->/;
    # print output to result.txt
    print $out_fh $_ if $permitted{$phrase};
}
Open a new filehandle in write mode, then print to it. See perldoc -f print or http://perldoc.perl.org/functions/print.html for more information.
...
open my $fh, '<', 'f2.txt';
open my $out_fh, '>', 'output.txt';
while (<$fh>) {
    my ($phrase) = /(.+?)\s+->/;
    print $out_fh $_
        if $permitted{$phrase};
}
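A related trick, if you would rather not touch every print statement: select the output handle as the default, so a plain print goes to the file (a short sketch using the same output.txt handle as above):
open my $out_fh, '>', 'output.txt' or die $!;
my $old_fh = select($out_fh);   # make $out_fh the default handle for print
print "this line goes to output.txt\n";
select($old_fh);                # restore the previous default (usually STDOUT)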
Mapping the file contents first produces a list of all of the file's lines. This isn't necessarily a bad thing, unless the file is substantially large. grebneke showed how to direct output to a file using > result.txt. Given this, and the (possible) map issue, consider just passing both files to the script on the command line and processing them with while loops:
use strict;
use warnings;
my %permitted;
while (<>) {
    $permitted{$1} = 1 if /(.+?)\s+\(/;
    last if eof;
}
while (<>) {
    print if /(.+?)\s+->/ and $permitted{$1};
}
Usage: perl script.pl f1.txt f2.txt > result.txt
Hope this helps!

Perl script for batch file processing

I have a relatively simple question for you experts. I have 300 files in a directory that I want to process with my Perl script (shown below). I was wondering if there is a way to use a variable and process the files as a batch in Perl. I have a file containing a list of the file names, if that helps.
Your feedback will be appreciated.
====================================
#!/usr/bin/perl
use strict;
use warnings;
open (FILE1, "001.txt") or die ("Can't open file $!");
while(<FILE1>){
    my $line = $_;
    chomp $line;
    if ( $line =~ m/^chr/ ) {
        open OUT, '>>', '001_tmp.txt';
        print OUT "$line\n";
    }
}
close(OUT);
close(FILE1);
======================================
Clarification:
Basically, I want a Perl script equivalent to the following shell script, where I can accommodate all the files using a variable.
#!/bin/bash
if [[ $# != 1 ]]
then
    echo "Usage: error <input>"
else
    echo $# $1
    export input=$1
    grep "^chr" $1 > ${input}_tmp.vcf
fi
So you want your while loop to read through each file in a given directory.
I would do something like this:
Use opendir and readdir so you can get the file names to operate on.
I would also look at grep to filter out the files you don't care about; in my example I filter out directories:
opendir(my $dh, $dir) or die "$dir: $!";
my @files = grep { !-d "$dir/$_" } readdir $dh;   # skip directories (test relative to $dir)
closedir $dh;
Now you will have a list of files to do work on...
for my $file (@files) {
    open my $fh, "<", "$dir/$file" or die "$dir/$file: $!";
    while( my $line = <$fh> ) {
        # TODO: stuff
    }
    close $fh;
}
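Since you mention having a file that lists the 300 file names, a minimal sketch of driving the same loop from that list instead of readdir (the list file name filelist.txt is an assumption; one file name per line):
open my $listfh, '<', 'filelist.txt' or die "filelist.txt: $!";
chomp( my @files = <$listfh> );   # one file name per line
close $listfh;
for my $file (@files) {
    open my $fh, '<', $file or die "$file: $!";
    while ( my $line = <$fh> ) {
        # TODO: stuff
    }
    close $fh;
}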
Edit: Your tags indicated batch-file, meaning Windows batch file. If that's not what you mean, disregard this. :-)
Perhaps something like this:
From a batch file:
for /f %%x in (listoffilenames.txt) do (
    perl myperlscript.pl %%x
)
And then your Perl script can be modified like this:
#!/usr/bin/perl
use strict;
use warnings;
# You may want to add a little more error handling
# around getting the filename, etc.
my $filename = shift or die "No filename specified.";
open (FILE1, "<", $filename) or die ("Can't open file $!");
open (OUT, '>>', "temp-$filename") or die ("Can't open output file $!");
while(<FILE1>){
    my $line = $_;
    chomp $line;
    if ( $line =~ m/^chr/ ) {
        print OUT "$line\n";
    }
}
close(OUT);
close(FILE1);