When I save a string to a file and then load it back from the same file, it behaves differently from the original string.
The first snippet with the hardcoded IP address works, but the second one, where I write the address to a file and then read it back, does not. If I print $ip after loading it from the file, it looks identical.
$ip = "100.10.100.1";
$port = 1337;
socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));
connect(S,sockaddr_in($port,inet_aton($ip)));
$ip = "100.10.100.1";
my $filename = 'c:\\tmp\\ip.txt';
open(my $fh, '>', $filename) or die "Cannot open $filename: $!";
print $fh $ip;
close $fh;
open(my $in, '<', $filename) or die "Cannot open $filename: $!";
$i = 0;
while (my $row = <$in>) {
    chomp $row;
    if ($i == 0) {
        $ip = $row;
    }
    $i = $i + 1;
}
$port = 1337;
socket(S, PF_INET, SOCK_STREAM, getprotobyname("tcp")) or die "socket: $!";
connect(S, sockaddr_in($port, inet_aton($ip))) or die "connect: $!";
OK, it turns out that perl was run with the -T command-line option, which means it runs in "taint mode" and distrusts any data it reads from files.
When printing the errors to a file, I saw the warning
"Insecure dependency in connect while running with -T switch"
Thanks for the tips!
I am opening a directory that has files that look like the following. Here is one file:
>UVWXY
ABCDEFGHIJKLMNOPQRSTUVWXYZ
>STUVW
ABCDEFGHIJKLMNOPQRSTUVWXYZ
>QRSTU
ABCDEFGHIJKLMNOPQRSTUVWXYZ
Here is a second file:
>EFGHI
ABCDEFGHIJKLMNOPQRSTUVWXYZ
Here is my code:
#!/usr/bin/perl
use warnings;
use strict;
my ($directory) = @ARGV;
my $dir = "$directory";
my @ArrayofFiles = glob "$dir/*";
open(OUT, ">", "/path/to/output.txt") or die $!;
foreach my $file (@ArrayofFiles) {
    open(my $fastas, '<', $file) or die $!;
    my $numberoffastas = grep {/>/} <$fastas>;
    #print $numberoffastas, "\n";
    while (my $line = <$fastas>) {
        print $line, "\n";
    }
}
Nothing is printed for $line, yet the code correctly counts the number of ">" lines in each file when it is opened, as I can verify by printing $numberoffastas.
How can I fix this code so that $line contains something like:
>EFGHI
or
ABCDEFGHIJKLMNOPQRSTUVWXYZ
Thanks
my $numberoffastas = grep {/>/} <$fastas>;
calls readline on the $fastas filehandle in list context, which consumes all the input on that filehandle. By the time of your subsequent call in while (my $line = <$fastas>), there is no more input left to provide, so the while condition fails immediately.
Save the inputs in an array and perform both operations on the array
my @inp = <$fastas>;
my $numberoffastas = grep {/>/} @inp;
...
foreach my $line (@inp) {
    ...
}
or if you are worried that the files are too large and will give you memory headaches, reopen the file
my $numberoffastas = grep {/>/} <$fastas>;
close $fastas;
open $fastas, '<', $file;
...
while (my $line = <$fastas>) { ... }
or seek to the beginning of the file
open my $fastas, '+<', $file; # '+<' opens the file in read-write (update) mode
my $numberoffastas = grep {/>/} <$fastas>;
...
seek $fastas, 0, 0; # rewind to beginning of file
while (my $line = <$fastas>) { ... }
I've already posted a question and fixed the problem in my code, but now my "specification has changed", so to speak, and I need to change some things about it.
Here's code that takes all .txt files from the current directory, cuts off the last line of the first file, both the first and last lines of every file in between, and the first line of the last file, then writes everything to a new file (in other words: it merges all files, deleting headers and footers so that the new file has only one header and one footer).
#!/usr/bin/perl
use warnings;
use Cwd;
use Tie::File;
use Tie::Array;
my $cwd = getcwd();
my $buff = '';
# Get all files in cwd.
my @files = grep -f, <*.txt>;
# Cut off header and footer of $files[1] to $files[$#files-1],
# but only the footer of $files[0] and the header of $files[$#files].
for (my $i = 0; $i <= $#files; $i++) {
    print 'Opening ' . $files[$i] . "\n";
    tie(my @lines, 'Tie::File', $files[$i]) or die "can't update $files[$i]: $!";
    splice @lines, 0, 1 unless $i == 0;
    splice @lines, -1, 1 unless $i == $#files;
    untie @lines;
    open(file, "<", $files[$i]) or die "can't read $files[$i]: $!";
    while (my $line = <file>) {
        $buff .= $line;
    }
    close file;
}
# Write the buffer to a new file.
my $allfilename = $cwd.'/Trace.txt';
print 'Writing all files into new file: ' . $allfilename . "\n";
open my $outputfile, '>', $allfilename or die "can't write to new file $allfilename: $!";
# Write the buffer into the output file.
print $outputfile $buff;
close $outputfile;
My problem: I don't want to change the original files, but my code does exactly that, and I'm having trouble coming up with a solution. The simplest way (simple meaning not having to change too much code) would be to copy all the files to a tmp directory, mess around with the copies there, and leave the original files untouched. Problem: a simple use of dircopy doesn't do it for me, since you have to give a new tmp dir to the dircopy function, which makes the code usable only on Windows or only on UNIX systems (but I need portability).
The next approach would be to make use of the File::Temp module, but I'm really having trouble with the docs on this one.
Does anybody have a good idea on this one?
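For what it's worth, File::Temp can hand out a scratch directory portably, and File::Copy can fill it; a minimal sketch of the copy-first idea (the surrounding flow is just an assumption about how the merge code would then use the copies):
use File::Temp qw(tempdir);
use File::Copy qw(copy);

# The temp directory is created portably and removed automatically at program exit.
my $tmpdir = tempdir( CLEANUP => 1 );

for my $file (glob '*.txt') {
    copy($file, $tmpdir) or die "Cannot copy $file to $tmpdir: $!";
}
# ... run the existing merge code against the copies in $tmpdir,
# leaving the original files untouched.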
I suspected that you didn't really want your original files modified when I answered your previous question.
I don't understand why you've gone back to accumulating all the text in a buffer before printing it, or why you've removed use strict, which is essential to any well-written Perl code.
Here's my previous solution modified to leave the input data untouched.
use strict;
use warnings;
use Tie::File;
my @files = grep -f, glob '*.txt';
my $all_filename = 'Trace.txt';
open my $out_fh, '>', $all_filename or die qq{Unable to open "$all_filename" for output: $!};
for my $i ( 0 .. $#files ) {
    my $file = $files[$i];
    next if $file eq $all_filename;
    print "Opening $file\n";
    tie my @lines, 'Tie::File', $file or die qq{Can't open "$file": $!};
    my ($start, $end) = (0, $#lines);
    ++$start unless $i == 0;
    --$end unless $i == $#files;
    print $out_fh "$_\n" for @lines[$start..$end];
}
close $out_fh;
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
my $outfile = 'Trace.txt';
# Get all files in cwd.
my @files = grep { -f && $_ ne $outfile } <*.txt>;
open my $outfh, '>', $outfile;
for my $file (@files) {
    my @lines = do { local @ARGV = $file; <> };
    shift @lines unless $file eq $files[0];
    pop @lines unless $file eq $files[-1];
    print $outfh @lines;
}
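In case the idiom in that code is unfamiliar: the line my @lines = do { local @ARGV = $file; <> }; temporarily sets @ARGV to a single file name and reads the diamond operator <>, which makes Perl open and slurp that file itself, so no explicit open call is needed.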
Just don't use Tie::File. Or is there a reason you need it, for example that all your files together don't fit into memory?
A version very close to your current implementation would be something like the following (untested) code. It simply skips the part where you update the file, only to reopen and read it afterwards. (Note that this is certainly not a very efficient or overly elegant way to do it; it just sticks as closely as possible to your implementation.)
#!/usr/bin/perl
use warnings;
use Cwd;
# use Tie::File;
# use Tie::Array;
my $cwd = getcwd();
my $buff = '';
# Get all files in cwd.
my @files = grep -f, <*.txt>;
# Cut off header and footer of $files[1] to $files[$#files-1],
# but only the footer of $files[0] and the header of $files[$#files].
for (my $i = 0; $i <= $#files; $i++) {
    print 'Opening ' . $files[$i] . "\n";
    open(my $fh, "<", $files[$i]) or die "can't open $files[$i] for reading: $!";
    my @lines = <$fh>;
    close $fh;
    splice @lines, 0, 1 unless $i == 0;
    splice @lines, -1, 1 unless $i == $#files;
    foreach my $line (@lines) {
        $buff .= $line;
    }
}
# Write the buffer to a new file.
my $allfilename = $cwd.'/Trace.txt';
print 'Writing all files into new file: ' . $allfilename . "\n";
open my $outputfile, '>', $allfilename or die "can't write to new file $allfilename: $!";
# Write the buffer into the output file.
print $outputfile $buff;
close $outputfile;
Based on Miller's answer, but more suitable for large files.
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
my $outfile = 'Trace.txt';
# Get all files in cwd.
my @files = grep { -f && $_ ne $outfile } <*.txt>;
open my $outfh, '>', $outfile;
my $counter = 0;
for my $file (@files) {
    open my $fh, '<', $file;
    my ($line, $prev) = ('', '');
    my $l = 0;
    while ($line = <$fh>) {
        # Print the buffered previous line, except the header (first line)
        # of every file after the first one.
        print $outfh $prev unless $l++ == 1 and $counter > 0;
        $prev = $line;
    }
    $counter++;
    # The last buffered line (the footer) is only written for the last file.
    print $outfh $prev if $counter == @files and $l > 0;
    close $fh;
}
I am writing a Perl script which reads a text file (which contains absolute paths of many files, one below the other), extracts the file name from each absolute path, and then appends all file names, separated by spaces, to the same file. So, consider a test.txt file:
D:\work\project\temp.txt
D:\work/tests/test.abc
C:/office/work/files.xyz
So after running the script the same file will contain:
D:\work\project\temp.txt
D:\work/tests/test.abc
C:/office/work/files.xyz
temp.txt test.abc files.xyz
I have this script revert.pl:
use strict;
foreach my $arg (@ARGV)
{
    open my $file_handle, '>>', $arg or die "\nError trying to open the file $arg : $!";
    print "Opened File : $arg\n";
    my @lines = <$file_handle>;
    my $all_names = "";
    foreach my $line (@lines)
    {
        my @paths = split(/\\|\//, $line);
        my $last = @paths;
        $last = $last - 1;
        my $name = $paths[$last];
        $all_names = "$all_names $name";
    }
    print $file_handle "\n\n$all_names";
    close $file_handle;
}
When I run the script I am getting the following error:
>> perl ..\revert.pl .\test.txt
Too many arguments for open at ..\revert.pl line 5, near "$arg or"
Execution of ..\revert.pl aborted due to compilation errors.
What is wrong over here?
UPDATE: The problem is that we are using a very old version of Perl, so I changed the code to:
use strict;
for my $arg (@ARGV)
{
    print "$arg\n";
    open (FH, ">>$arg") or die "\nError trying to open the file $arg : $!";
    print "Opened File : $arg\n";
    my $all_names = "";
    my $line = "";
    for $line (<FH>)
    {
        print "$line\n";
        my @paths = split(/\\|\//, $line);
        my $last = @paths;
        $last = $last - 1;
        my $name = $paths[$last];
        $all_names = "$all_names $name";
    }
    print "$line\n";
    if ($all_names eq "")
    {
        print "Could not detect any file name.\n";
    }
    else
    {
        print FH "\n\n$all_names";
        print "Success!\n";
    }
    close FH;
}
Now its printing the following:
>> perl ..\revert.pl .\test.txt
.\test.txt
Opened File : .\test.txt
Could not detect any file name.
What could be wrong now?
Perhaps you are running an old Perl version, so you have to use the 2-argument version of open:
open(File_handle, ">>$arg") or die "\nError trying to open the file $arg : $!";
Note that I wrote File_handle without the $. Reading and writing operations on the file will then be:
@lines = <File_handle>;
#...
print File_handle "\n\n$all_names";
#...
close File_handle;
Update: reading file lines:
open FH, "+>>$arg" or die "open file error: $!";
#...
while( $line = <FH> ) {
#...
}
Here is code which works... The problem is that when the IP for some site can't be obtained, the script stops. Is there some way to make the script keep running even if the IP for some site can't be obtained? I need something like VB's On Error Resume Next...
our $file = abs_path("site.txt");
open (FH, "< $file") or die "Can't open $file for read: $!";
our @lines;
while (<FH>) {
    chomp($hostname = $_);    # change this to your hostname
    our ($addr) = inet_ntoa((gethostbyname($hostname))[4]);
    our @newarr;
    push(@newarr, $addr);
}
Perl's exception handling mechanism is eval.
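Wrapped around the failing call inside your loop, it would look something like this minimal sketch (assuming Socket's inet_ntoa and gethostbyname as in your code); the eval block traps the fatal error so the loop can continue, much like On Error Resume Next:
my $addr = eval { inet_ntoa( (gethostbyname($hostname))[4] ) };
if ($@) {
    warn "Could not resolve $hostname: $@";
    next;    # skip this host and keep going
}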
I would re-write your code (untested) as follows:
use strict;
use warnings;
use Socket;
my $file = 'test.txt';
open my $fh, '<', $file
or die "Can't open $file for read: $!";
my @addr;
while (my $hostname = <$fh>) {
    last unless $hostname =~ /\S/;
    $hostname =~ s/\s+\z//;
    my $ip = gethostbyname $hostname;
    $ip = defined($ip) ? inet_ntoa $ip : '';
    push @addr, [$hostname, $ip];
}
close $fh or die "Cannot close '$file': $!";
use YAML;
print Dump \@addr;
Note the following:
- Bareword file handles are package global.
- our variables have package scope.
- If you are going to assign the return value of <$fh> to a variable, do so in the while condition, without messing with $_.
- When posting code, please post code that at least has a reasonable chance of compiling.
I am having trouble searching for a value and printing it. This is what I have so far. What am I doing wrong? How do I extract the desired value by searching the output?
my $host = $ARGV[0];
my $port = $ARGV[1];
my $domain = $ARGV[2];
my $bean = $ARGV[3];
my $get = $ARGV[4];
open(FILE, ">", "/home/hey");
print FILE "open $host:$port\n";
print FILE "domain $domain\n";
print FILE "bean $bean\n";
print FILE "get -s $get\n";
print FILE "close\n";
close FILE;
open JMX, "/root/jdk1.6.0_37/bin/java -jar /var/scripts/jmxterm-1.0-alpha-4-uber.jar -v silent -n < /home/hey |" or die $!;
open (dbg, ">", "/home/donejava1");
#print JMX "help \n";
foreach ( <JMX> )
{
    chomp;
    print $_;
    open (LOG, ">", "/home/out1");
    print LOG $_;
    close LOG;
}
Output:
{
committed = 313733;
init = 3221225472;
max = 3137339392;
used = 1796598680;
}
How do I print 1796598680, looking for the attribute "used"?
The following example should provide a solution for you.
perl -lne'print $1 if /used\s*=\s*(\d+);/' filename
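The same pattern can also be dropped into the script's existing read loop instead of using a separate one-liner; a minimal sketch against the JMX handle from the question:
foreach my $line ( <JMX> )
{
    if ( $line =~ /used\s*=\s*(\d+);/ )
    {
        print "$1\n";    # prints 1796598680 for the sample output
    }
}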