I open a file and print some data on the screen, but I want to clear the screen after I output the data. I use clear in the program but I don't see any effect; the screen isn't cleared. Is there a command or function that can do that?
I want to see the contents of the current file only, not remnants of the previous file on the screen.
Here is my program:
`ls > File_List`;
open List, "<./File_List";
while (eof(List) != 1) {
    $Each = readline(*List);
    chomp $Each;
    print $Each;
    print "\n";
    `clear`;
    open F, "<./$Each";
    while (eof(F) != 1) {
        for ($i = 0; $i < 20; $i++) {
            $L = readline(*F);
            print $L;
        }
        last;
    }
    close(F);
    sleep(3);
    $Each = "";
}
close List;
Thanks
Your program uses non-idiomatic Perl. A more natural style would be
#!/usr/bin/env perl

use strict;
use warnings;
no warnings 'exec';

opendir my $dh, "." or die "$0: opendir: $!";

while (defined(my $name = readdir $dh)) {
    if (-T $name) {
        system("clear") == 0 or warn "$0: clear exited " . ($? >> 8);
        print $name, "\n";
        system("head", "-20", $name) == 0 or warn "$0: head exited " . ($? >> 8);
        sleep 3;
    }
}
Instead of writing a list of names to another file, read the names directly with opendir and readdir. The defined check is necessary in case you have a file named 0, which Perl considers to be a false value and would terminate the loop prematurely.
You don’t want to print everything. The directory entry may be a directory or an executable image or a tarball. The -T file test attempts to guess whether the file is a text file.
Invoke the external clear command using Perl’s system.
Finally, use the external head command to print the first 20 lines of each text file.
clear isn't working because the control sequence it outputs to clear the screen is being captured and returned to your program instead of being sent to the display.
Try
print `clear`
or
system('clear')
instead
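Applied to your loop, a minimal sketch of either fix (replacing the bare backticks) is:
print `clear`;     # print the captured control sequence so it reaches the display
# or
system('clear');   # let clear write directly to the terminal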
The solution you provided doesn't work because the clear command is performed in a sub-shell. I suggest using a multi-platform CPAN module instead: Term::Screen::Uni
Example:
use Term::Screen::Uni;
my $screen = Term::Screen::Uni->new;
$screen->clrscr;
Use system(), it works.
system("ls > File_List");
system("clear;");
I'm trying to get a Perl program working. I get an error
readline() on closed filehandle IN at Test.pl line 368, <IN> line 65.
Lines 363-369 of the program look like this:
print "Primer3 is done! \n";
my $forward = "$snpid.for";
my #forward_out;
my $i=0;
open(IN,$forward);
while(<IN>){
chomp;s/\r//;
The <IN> line 65 part refers to the configuration file being read. The last line of that file (line 65) looks like this:
num_cpus = 40
So either the configuration file is not correct, or Perl does not recognize that this is the end of the file.
Is there a way to solve this?
Update
Based on the comments I added or die to the open() call and got this:
No such file or directory at Test.pl line 367.
The open command is part of a subroutine Primer3_Run
sub Primer3_Run {
    my $snpid = shift;
    my $seq   = shift;
    my $tmp_input = "Primer3.tmp.input";
    my $len = length($seq);
    open(OUT, ">$tmp_input");
    close OUT;
    if ( -e "$snpid.for" ) {
        system "del $snpid.for";
    }
    if ( -e "$snpid.rev" ) {
        system "del $snpid.rev";
    }
    system "$params{'primer3'} Primer3.tmp.input Primer3.Log.txt 2>&1";
    my $forward = "$snpid.for";
    my @forward_out;
    my $i = 0;
    open(IN, $forward) or die $!;
From what you've revealed, the problem will be this block
if ( -e "$snpid.for" ) { system "del $snpid.for"; }
followed soon afterwards by this
my $forward = "$snpid.for";
open(IN, $forward) or die $!;
The new or die $! is presumably from my advice, so previously you were deleting "$snpid.for" and then trying to open it for input. The error you saw should be expected
No such file or directory at Test.pl line 367.
You just deleted it!
All I can think of that may help is to be more organised with your coding. What did you mean when you tried to open a file that you had just made sure didn't exist?
Beyond that, you must always add use strict and use warnings 'all' at the top of every Perl program you write. Use lexical file handles together with the three-parameter form of open, and always check the status of every open call, using die with the value of $! in the die string to explain why the open failed.
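As a minimal sketch of that pattern (the file name is illustrative):
use strict;
use warnings 'all';

my $file = 'Primer3.tmp.input';    # illustrative file name
open my $in, '<', $file or die "Unable to open '$file' for input: $!";
while ( my $line = <$in> ) {
    chomp $line;
    # process $line here
}
close $in;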
This Perl script traverses all directories and subdirectories, searching for a file named RUN. It then opens each RUN file and runs the first line written in it. The problem is that I am not able to redirect the output of the system command to a file named error.log and STDERR to another file named test_file.errorlog; no such files are created.
Note that all variables are declared, even if not shown here.
find(\&pickup_run, $path_to_search);

### Subroutine for extracting path of directories with RUN FILE PRESENT
sub pickup_run {
    if ($File::Find::name =~ /RUN/) {
        ### If RUN file is present, push it into array named run_file_present
        push(@run_file_present, $File::Find::name);
    }
}

###### Iterate over the array containing paths to directories containing RUN files one by one
foreach my $var (@run_file_present) {
    $var =~ s/\//\\/g;
    ($path_minus_run = $var) =~ s/RUN\b//;
    #print "$path_minus_run\n";
    my $test_case_name;
    ($test_case_name = $path_minus_run) =~ s/expression to be replaced//g;
    chdir "$path_minus_run";
    ######## While iterating over the paths, open each file
    open data, "$var";
    ##### Run the first line, which contains the command
    my @lines = <data>;
    my $return_code = system(" $lines[0] >error.log 2>test_file.errorlog");
    if ($return_code) {
        print "$test_case_name \t \t FAIL \n";
    }
    else {
        print "$test_case_name \t \t PASS \n";
    }
    close(data);
}
The problem is almost certainly that $lines[0] has a newline at the end after being read from the file
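So the immediate fix, keeping the rest of the loop as it is, would be to chomp what you read before passing it to system:
my @lines = <data>;
chomp @lines;    # strip the trailing newline from each line
my $return_code = system(" $lines[0] >error.log 2>test_file.errorlog");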
But there are several improvements you could make
Always use strict and use warnings at the top of every Perl program, and declare all your variables using my as close as possible to their first point of use
Use the three-parameter form of open and always check whether it succeeded, putting the built-in variable $! into your die string to say why it failed. You can also use autodie to save writing the code for this manually for every open, but it requires Perl v5.10.1 or better
You shouldn't put quotes around scalar variables -- just use them as they are, so chdir $path_minus_run and open data, $var are correct
There is also no need to save all the files to be processed and deal with them later. Within the wanted subroutine, File::Find sets you up with $File::Find::dir set to the directory containing the file, and $_ set to the bare file name without a path. It also does a chdir to the directory for you, so the context is ideal for processing the file
use strict;
use warnings;
use v5.10.1;
use autodie;

use File::Find;

my $path_to_search = shift;    # top of the directory tree, taken from the command line

find( \&pickup_run, $path_to_search );

sub pickup_run {
    return unless -f and $_ eq 'RUN';
    my $cmd = do {
        open my $fh, '<', $_;
        <$fh>;
    };
    chomp $cmd;
    ( my $test_name = $File::Find::dir ) =~ s/expression to be replaced//g;
    my $retcode = system("$cmd >error.log 2>test_file.errorlog");
    printf "%s\t\t%s\n", $test_name, $retcode ? 'FAIL' : 'PASS';
}
I have two files
first:
8237764738;00:78:9E:EE:CA:6F;FTTH;MULTI
8237764738;2C:39:96:52:47:82;FTTH;MULTI
0415535921;E8:BE:81:86:F1:6F;FTTH;MULTI
0415535921;2C:39:96:5B:12:C6;EZ;SINGLE
...etc
second:
00:78:9E:EE:CA:6F;2013/10/28 13:37:50
E8:BE:81:86:F1:6F;2013/11/05 13:38:30
00:78:9E:EC:4A:B0;2013/10/28 13:59:16
2C:E4:12:AA:F7:95;2013/10/31 13:57:55
...etc
and I have to take the MAC address (the second field) from the first file, find it in the second one,
and, if it matches, append the date from the second file to the end of the corresponding line from the first file.
output:
8237764738;00:78:9E:EE:CA:6F;FTTH;MULTI;2013/10/28 13:37:50
0415535921;E8:BE:81:86:F1:6F;FTTH;MULTI;2013/11/05 13:38:30
I wrote a simple script to find the MAC address,
but I don't know how to make it add the date.
my %iptv;
my @result;

open IN, "/home/terminals.csv";
while (<IN>) {
    chomp;
    @wynik = split(/;/, $_);
    $iptv{$result[1]} = $result[0];
}
close IN;

open IN, "/home/reboots.csv";
open OUT, ">/home/out.csv";
while (<IN>) {
    chomp;
    my ($mac, $date) = split(/;/, $_);
    if (defined $iptv{$mac}) {
        print OUT "$date,$mac \n";
    }
}
close IN;
close OUT;
Assuming that the first file lists each MAC number once and that you want an output line for each time the MAC appears in the second file, then:
#!/usr/bin/env perl
use strict;
use warnings;

die "Usage: $0 terminals reboots\n" unless scalar(@ARGV) == 2;

my %iptv;

open my $in1, '<', $ARGV[0] or die "Failed to open file $ARGV[0] for reading";
while (<$in1>) {
    chomp;
    my @result = split(/;/, $_);    # Fix array used here
    $iptv{$result[1]} = $_;         # Fix what's stored here
}
close $in1;

open my $in2, '<', $ARGV[1] or die "Failed to open file $ARGV[1] for reading";
while (<$in2>) {
    chomp;
    my ($mac, $date) = split(/;/, $_);
    print "$iptv{$mac};$date\n" if (defined $iptv{$mac});
}
close $in2;
This uses two file names on the command line and writes to standard output; it is a more general purpose program than your original. It also gets me around the problem that I don't have a /home directory.
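Assuming you save the program as merge.pl (the name is illustrative), you would run it as:
perl merge.pl terminals.csv reboots.csv > out.csv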
For your sample inputs, the output is:
8237764738;00:78:9E:EE:CA:6F;FTTH;MULTI;2013/10/28 13:37:50
0415535921;E8:BE:81:86:F1:6F;FTTH;MULTI;2013/11/05 13:38:30
You were actually fairly close to this, but were making some silly little mistakes.
In your code, you either aren't showing everything or you aren't using:
use strict;
use warnings;
Perl experts use both to make sure they don't make silly mistakes; beginners should do so too. It would have pointed out that @wynik was not declared with my and was assigned to but not used, for example. You could have meant to write @result = split...;. You were not saving the correct data, and you were not writing out the information from $iptv{$mac} that you needed to.
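For instance, under strict the undeclared array becomes a compile-time error before any data is processed (a minimal sketch; the exact message wording varies with your Perl version):
use strict;
use warnings;

my @result;
@wynik = split /;/, $_;
# Global symbol "@wynik" requires explicit package name at script.pl line 5.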
I have a Perl script to which I supply input (a text file) from a batch file, or sometimes from the command prompt. When the input comes from a batch file, the file may not exist. I want to catch the "No such file" error and do some other task when that error is thrown. Please find the sample code below.
while (<>) {    # here it throws an error when the file doesn't exist
    # parse the file.
}
# if an error is thrown I want to handle it and do some other task
Filter @ARGV before you use <>:
@ARGV = grep { -e $_ } @ARGV;
die('no files') if scalar(@ARGV) == 0;
# now carry on; if we've got here, there is something to do with files that exist
while (<>) {
    #...
}
<> reads from the files listed in @ARGV, so if we filter that before it gets there, it won't try to read non-existent files. I've added the check on the size of @ARGV because if you supply a list of files which are all absent, it will wait on stdin (the flip side of using <>). This assumes that you don't want to do that.
However, if you don't want to read from stdin, <> is probably a bad choice; you might as well step through the list of files in @ARGV. If you do want the option of reading from stdin, then you need to know which mode you're in:
$have_files = scalar(@ARGV);
@ARGV = grep { -e $_ } @ARGV;
die('no files') if $have_files && scalar(grep { defined $_ } @ARGV) == 0;
# now carry on; if we've got here, there is something to do:
# we have files that exist, or we're expecting stdin
while (<>) {
    #...
}
The diamond operator <> means:
Look at the names in @ARGV and treat them as files you want to open.
Just loop through all of them, as if they were one big file.
Actually, Perl uses the ARGV filehandle for this purpose
If no command line arguments are given, use STDIN instead.
So if a file doesn't exist, Perl gives you an error message (Can't open nonexistent_file: ...) and continues with the next file. This is usually what you want. If it is not, just do it manually. Stolen from the perlop page:
unshift(@ARGV, '-') unless @ARGV;
FILE: while ($ARGV = shift) {
    open(ARGV, $ARGV);
    LINE: while (<ARGV>) {
        ...    # code for each line
    }
}
The open function returns a false value when a problem is encountered. So always invoke open like
open my $filehandle, "<", $filename or die "Can't open $filename: $!";
The variable $! contains the reason for the failure. Instead of dying, we can do some other error recovery:
use feature qw(say);
@ARGV or @ARGV = "-";    # the - symbolizes STDIN
FILE: while (my $filename = shift @ARGV) {
    my $filehandle;
    unless (open $filehandle, "<", $filename) {
        say qq(Oh dear, I can't open "$filename". What do you want me to do?);
        my $tries = 5;
        do {
            say qq(Type "q" to quit, or "n" for the next file);
            my $response = <STDIN>;
            exit if $response =~ /^q/i;
            next FILE if $response =~ /^n/i;
            say "I have no idea what that meant.";
        } while --$tries;
        say "I give up" and exit 1;
    }
    LINE: while (my $line = <$filehandle>) {
        # do something with $line
    }
}
I am trying to write a program that reads all files recursively from some top point into an array, and subsequently reads lines of filenames from a separate file, printing whether each of those filenames is present in the earlier array.
My program churns through the 43K files in the directory structure and subsequently gets through about 300 of the 400 lines in the file before giving me a spectacular "*** glibc detected *** perl: corrupted double-linked list: 0x0000000000a30740 ***"
I have no knowledge about this at all. Could this be an 'out of memory' type of bug? I can't imagine it is, since the host has 24G of memory.
Do you have any idea where I'm going wrong? I was trying to save time and effort by reading the entire list of files from the subdirectory into an array one time and subsequently matching against it using the shorter list of files from the filename given as $ARGV[0].
Here is my code:
#!/usr/bin/perl

use warnings;
use strict;
use diagnostics;
use File::Find;
use 5.010001;

## debug subroutine
my $is_debug = $ENV{DEBUG} // 0;
sub debug { print "DEBUG: $_[0]\n" if $is_debug }

## exit unless properly called with ARGV
die "Please provide a valid filename: $!" unless $ARGV[0] && (-e $ARGV[0]);

my @pic_files;
my $pic_directory = "/files/multimedia/pictures";
find( sub {
    push @pic_files, $File::Find::name
        if -f && ! -d;
}, $pic_directory);

open LIST, '<', $ARGV[0] or die "Could not open $ARGV[0]: $!";
while (<LIST>) {
    chomp;
    debug "\$_ is ->$_<-";
    if ( @pic_files ~~ /.*$_/i ) {
        print "found: $_\n";
    } else {
        print "missing: $_\n";
    }
}
close LIST or die "Could not close $ARGV[0]: $!";
And here is a sample of the file:
DSC02338.JPG
DSC02339.JPG
DSC02340.JPG
DSC02341.JPG
DSC02342.JPG
DSC02343.JPG
DSC02344.JPG
DSC02345.JPG
DSC02346.JPG
DSC02347.JPG
And the obligatory error:
missing: DSC02654.JPG
DEBUG: $_ is ->DSC02655.JPG<-
missing: DSC02655.JPG
DEBUG: $_ is ->DSC02656.JPG<-
missing: DSC02656.JPG
*** glibc detected *** perl: corrupted double-linked list: 0x0000000000a30740 ***
======= Backtrace: =========
/lib/libc.so.6(+0x71bd6)[0x7fb6d15dbbd6]
/lib/libc.so.6(+0x7553f)[0x7fb6d15df53f]
Thanks in advance!
This is a very inefficient algorithm. You are running 21,500 * n regexes, where n is the number of files in LIST. My guess is, this is opening you up to some kind of underlying memory issue or bug.
Here is an alternative approach that would be much more efficient without many changes. First, read the files into a hash rather than an array (I added lc to make everything lowercase, since you want case-insensitive matching):
my %pic_files;
find( sub {
    $pic_files{lc $File::Find::name}++
        if -f && ! -d;
}, $pic_directory);
Edit: Second, rather than using a regex to search every single file in the directory, use a regex on the input line to intelligently find potential matches.
my $path_portion = lc $_;
my $found = 0;
do {
    if (exists $pic_files{$path_portion} or exists $pic_files{'/' . $path_portion}) {
        $found = 1;
    }
} while (!$found and $path_portion =~ /\/(.*)$/ and $path_portion = $1);

if ($found) { print "found: $_\n"; }
else        { print "not found: $_\n"; }
This checks the path in the input file, then lops off the first directory in the path each time it does not match and checks again. It should be much faster, and hopefully this strange bug will go away (though it would be nice to figure out what was happening; if it is a bug in Perl, your version becomes very important, since smart match is a new feature that has had a lot of recent changes and bug fixes).
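To make that concrete, for a hypothetical input line of pictures/2013/DSC02338.JPG the hash keys tried would be (lowercased by the lc above):
pictures/2013/dsc02338.jpg   and   /pictures/2013/dsc02338.jpg
2013/dsc02338.jpg            and   /2013/dsc02338.jpg
dsc02338.jpg                 and   /dsc02338.jpg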
Although I haven't seen an error like this before, I suspect it is being caused by generating a 43,000-element list of files and using it in a smart match. Are you using a 64-bit perl?
You also make things more difficult by storing the full path to each file when all you need to match is the base file name.
This really isn't the sort of thing smart match is good for, and I suggest that you should create a hash of the file names in the input file and mark them off one by one as find comes across them
This program shows the idea. I don't have a Perl installation at hand so I can't test it, but it looks OK
use strict;
use warnings;

use File::Find;

my $listfile = shift;
die "Please provide a valid filename" unless $listfile;

open my $list, '<', $listfile or die "Unable to open '$listfile': $!";
my %list;
while (<$list>) {
    chomp;
    $list{$_} = 0;
}
close $list;

my $pic_directory = '/files/multimedia/pictures';

find( sub {
    if (-f and exists $list{$_}) {
        print "found: $_\n";
        $list{$_}++;
    }
}, $pic_directory);

for my $file (keys %list) {
    print "missing: $file\n" unless $list{$file};
}