How to read a line into an array using perl - perl

I am using perl for the first time. I am trying to read a line from an input file and store it in an array. Note that the input file contains a single line with a bunch of words.
I tried using the following code:
open input, "query";
my @context = <input>;
But this gives a syntax error. How could I fix this?

It doesn't give a syntax error. It even works fine if there's only one line. The following will only get the first line even if there are more than one:
my @context = scalar( <input> );
But why wouldn't you just do
my $context = <input>;
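Since the question says the file holds a single line with a bunch of words, splitting that line into an array may be what is actually wanted; this is only a guess at the intent:
my $context = <input>;
my @words = split ' ', $context;   # split on whitespace; the trailing newline is ignored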

What is the syntax error? As far as I can see, there is none. But I would suggest some improvements.
Always use use strict; use warnings; as the first lines! They help to detect a lot of possible problems.
The code has no error handling.
Use lexical variables for filehandles. Using bareword filehandles is discouraged.
Open the file for reading only if you only need to read from it.
You probably also want to remove the trailing newlines from the array elements.
If the file does not need to stay open, it is worth closing it. Here that is not strictly needed, as exiting will close it implicitly, but it is good practice to close files explicitly.
So it could be:
#!/usr/bin/perl
use strict;
use warnings;
open my $input, '<', 'infile' or die "$!";
my @context = map { chomp; $_ } <$input>;
close $input;
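A more common idiom for the same thing is to assign first and then chomp the whole array in one call; the effect is the same:
chomp( my @context = <$input> );   # read all lines and strip the trailing newlines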

Related

What is the Perl's IO::File equivalent to open($fh, ">:utf8",$path)?

It's possible to write a file UTF-8 encoded as follows:
open my $fh,">:utf8","/some/path" or die $!;
How do I get the same result with IO::File, preferably in 1 line?
I got this one, but does it do the same and can it be done in just 1 line?
my $fh_out = IO::File->new($target_file, 'w');
$fh_out->binmode(':utf8');
For reference, the script starts as follows:
use 5.020;
use strict;
use warnings;
use utf8;
# code here
Yes, you can do it in one line.
open accepts one, two or three parameters. With one parameter, it is just a front end for the built-in open function. With two or three parameters, the first parameter is a filename that may include whitespace or other special characters, and the second parameter is the open mode, optionally followed by a file permission value.
[...]
If IO::File::open is given a mode that includes the : character, it passes all the three arguments to the three-argument open operator.
So you just do this.
my $fh_out = IO::File->new('/some/path', '>:utf8');
It is the same as your first open line because it gets passed through.
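For illustration, writing through that handle might look like this (the text here is made up; new, print and close are the usual IO::File / IO::Handle methods):
my $fh_out = IO::File->new('/some/path', '>:utf8') or die "open failed: $!";
$fh_out->print("caf\x{e9}\n");   # the e-acute is encoded to UTF-8 on the way out
$fh_out->close;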
I would suggest trying out Path::Tiny. For example, to open and write out your file:
use Path::Tiny;
path('/some/path')->spew_utf8(@data);
From the docs, on spew, spew_raw, spew_utf8
Writes data to a file atomically. [ ... ]
spew_raw is like spew with a binmode of :unix for a fast, unbuffered, raw write.
spew_utf8 is like spew with a binmode of :unix:encoding(UTF-8) (or PerlIO::utf8_strict ). If Unicode::UTF8 0.58+ is installed, a raw spew will be done instead on the data encoded with Unicode::UTF8.
The module integrates many tools for handling files and directories, paths and content. It is often simple calls like the above, but also method chaining, a recursive directory iterator, hooks for callbacks, etc. There is error handling throughout, consistent and thoughtful dealing with edge cases, flock on input/output handles, its own tiny and useful class for exceptions ... see the docs.
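Reading the content back with the same module is just as short; for example (a small sketch, same hypothetical path):
my $all   = path('/some/path')->slurp_utf8;                     # whole file as one decoded string
my @lines = path('/some/path')->lines_utf8( { chomp => 1 } );   # or as chomped lines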
Edit:
You could also use File::Slurp, if its use weren't discouraged, e.g.:
use File::Slurp qw(write_file);
write_file( 'filename', {binmode => ':utf8'}, $buffer ) ;
The first argument to write_file is the filename. The next argument is
an optional hash reference and it contains key/values that can modify
the behavior of write_file. The rest of the argument list is the data
to be written to the file.
Some good reasons not to use it?
Not reliable
Has some bugs
And as @ThisSuitIsBlackNot said, File::Slurp is broken and wrong

Perl: renaming doesn't work for $value filename [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 7 years ago.
I want to fill a folder with copies of the same file under different names. I created a filelist.txt with the filenames using the Windows cmd and then wrote the following code:
use strict; # safety net
use warnings; # safety net
use File::NCopy qw(copy);
open FILE, 'C:\blabla\filelist.txt';
my @filelist = <FILE>;
my $filelistnumber = @filelist + 1;
my $file = 0;
## my $filename = 'null.txt';
my $filename = $filelist[$file];
while( $file < $filelistnumber ){
    copy('base.smp','temp.smp');
    rename 'temp.smp', $filename;
    $file = $file + 1;
};
If I try renaming it into 'test.smp' or whatever, it works. If I try the code above, I get this:
Use of uninitialized value $filename in print at blablabla/bla/bla.pl line 25, <FILE> line 90.
What am I doing wrong? I feel there's some kind of little mistake, a syntax mistake probably, that keeps evading me.
First, here's some improved code:
use strict;
use warnings;
use File::Copy;
while (<>) {
    chomp;
    copy('base.smp', $_) or die $!;
}
You'll save it as script.pl and invoke it like this:
$ perl script.pl C:\blabla\filelist.txt
In what ways is this code an improvement?
It uses the core module File::Copy instead of the deprecated File::NCopy.
It uses the null filehandle or "diamond operator" (<>) to implicitly iterate over a file given as a command line parameter, which is simple and elegant.
It handles errors in the event that copy() fails for some reason.
It doesn't use a while loop or a C-style for loop to iterate over an array, which are both prone to off-by-one errors and forgetting to re-assign the iterator, as you've discovered.
It doesn't use the old 2-argument syntax for open(). (Well, not explicitly, but that's kind of beyond the scope of this answer.)
What am I doing wrong? I feel there's some kind of little mistake, a
syntax mistake probably, that keeps evading me.
A syntax error would have resulted in an error message saying that there was a syntax error. But since you asked what you're doing wrong, let's walk through it:
use File::NCopy qw(copy);
This module was last updated in 2007 and is marked as deprecated. Don't use it.
open FILE, 'C:\blabla\filelist.txt';
You should use the three-argument form of open, use a lexical filehandle, and always check the return values of system calls.
my #filelist = <FILE>;
Rarely do you need to slurp an entire file into memory. In this case, you don't.
my $filelistnumber = @filelist + 1;
There's nothing inherently wrong with this line, but there is when you consider how you're using it later on. Remember that arrays are 0-indexed, so you've just set yourself up for an out of bounds array index. But we'll get to that in a second.
my $filename = $filelist[$file];
You would typically want to do this assignment inside your loop, lest you forget to update it after incrementing your counter (which is exactly what happened here).
while( $file < $filelistnumber ){
This is an odd way to iterate over an array in Perl. You could use a typical C-style for loop, but the most Perlish thing to do would be to use a foreach-style loop:
for my $element (@array) {
    ...
}
Each element of the list is localized to the loop, and you don't have to worry about counters, conditions, or array bounds.
copy('base.smp','temp.smp');
Again, always check the return values of system calls.
rename 'temp.smp', $filename;
No need to do a copy and a rename. You can copy to your final destination filename the first time. But if you are going to rename, always check the return values of system calls.
};
Blocks don't need to be terminated with a semicolon like simple statements do.
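If you do keep the slurped @filelist from your original code, the loop itself could then be as simple as this sketch (untested):
chomp @filelist;                       # strip the newlines read from filelist.txt
for my $filename (@filelist) {
    copy( 'base.smp', $filename ) or die "Copy to '$filename' failed: $!";
}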
You should avoid using bareword filehandles. When opening, you should use a lexical file reference like below and make sure you catch it if it fails:
open(my $fh, '<', 'C:\blabla\filelist.txt') or die "Cannot open filelist.txt: $!";
The $fh variable will contain your file reference.
For your problem, it looks as though your filelist.txt must be empty. Try using Data::Dumper to print out your @filelist to determine its contents.
use Data::Dumper;
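followed by, for example:
print Dumper( \@filelist );   # an empty array would print as $VAR1 = [];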
EDIT:
It looks like you also want to set the $filename variable to the next entry in the list on each iteration, so put $filename = $filelist[$file]; at the beginning of your loop.
Your problem could also be that you are looping too far. Try getting rid of the + 1 in my $filelistnumber = @filelist + 1;

perl - string compare failing while fetching a line from a file.

My code:
#!/usr/bin/perl -w
use strict;
use warnings;
my $codes=" ";
my $count=0;
my $str1="code1";
open (FILE, '/home/vpnuser/testFile.txt') or die("Could not open the file.");
while ($codes = <FILE>)
{
    print($codes);
    if ($codes eq $str1)
    {
        $count++;
    }
}
print "$count";
The comparison always fails. My testFile.txt contains one simple line - code1
When I write a separate Perl script where the two strings are declared in the script itself rather than read from a file, the eq operator works fine. But when I read the string from a file, there is a problem. Please help,
Thanks in advance!
Don't forget to chomp your file input if you don't want it to end in a return character.
while(my $codes = <FILE>)
{
chomp $codes;
That is likely the reason why your string comparison is failing.
As an additional aside, kudos for including use strict; and use warnings; at the top of your script, like one should always do.
I'd also recommend including use autodie; at the top when doing file processing. It will automatically give you a detailed error message for many kinds of operations, such as opening a file, so you won't have to remember to include the error code $! or the filename in your die statement.
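Putting the chomp and autodie suggestions together, the whole loop might look like this (a sketch, untested):
use strict;
use warnings;
use autodie;   # open() and close() now die with a useful message on failure

my $count = 0;
my $str1  = "code1";

open my $fh, '<', '/home/vpnuser/testFile.txt';
while ( my $codes = <$fh> ) {
    chomp $codes;                 # remove the trailing newline before comparing
    $count++ if $codes eq $str1;
}
close $fh;
print "$count\n";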

Parallel reading of input file with Parallel::Loops module

I often come across a scenario where I need to parse a very large input file and then process the lines for final output. With many of these files it can take a while to process.
Since it's usually the same process, and usually I want to store the processed data in a hash for the final manipulation, it seems that maybe something like Parallel::Loops would be helpful and speed the process up.
If I'm not thinking this through correctly, please let me know.
I've used Parallel::Loops before to process many files at a time with great results, but I can't figure out how to process many lines from one file as I don't know how to pass each line of the file in as a reference.
If I try to do this:
#!/usr/bin/perl
use warnings;
use strict;
use Data::Dumper;
use Parallel::Loops;
my $procs = 12;
my $pl = Parallel::Loops->new($procs);
my %data;
$pl->share(\%data);
my $input_file = shift;
open( my $in_fh, "<", $input_file ) || die "Can't open the file for reading: $!";
$pl->while( <$in_fh>, sub {
<some kind of munging and processing here>
});
I get the error:
Can't use string ("6334") as a subroutine ref while "strict refs" in use at /usr/local/share/perl/5.14.2/Parallel/Loops.pm line 518, <$in_fh> line 501.
I know that I need to pass a reference to the parallel object but I can't figure out how to make a reference to a readline element.
I also know that I can slurp the whole file in first and then pass an array reference of all of the lines, but for very large files that takes a lot of memory, and intuitively a lot more time as it technically needs to then read the file twice.
Is there a way to pass each line of a file into the Parallel::Loops object so that I can process many of the lines of a file at once?
I'm not in a position to test this as my laptop doesn't have Parallel::Loops installed and I have no consistent internet access.
However, from the documentation, the while method clearly takes two subroutine references as parameters, and you are passing <$in_fh> as the first. The method probably coerces its parameters to scalars using a prototype, so that means you are passing a simple string where a subroutine reference is expected.
Because of my situation I am far from certain, but you may get a result from
$pl->while(
    sub {
        scalar <$in_fh>;
    },
    sub {
        # Process a line of data
    }
);
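Because the condition sub runs in the parent before each child is forked (the module's own while example relies on the same behaviour), another untested variant is to read each line into a lexical that both subs close over:
my $line;
$pl->while(
    sub { defined( $line = <$in_fh> ) },   # parent: read the next line, stop at EOF
    sub {
        # child: $line holds the value it had when this child was forked
        chomp( my $copy = $line );
        $data{$copy}++;                    # hypothetical processing into the shared hash
    }
);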
I hope this helps. I will investigate further when I get home on Friday.

How to append to a file?

I am trying to append some text to the end of a file in Mac OSX having a .conf extension. I am using the following code to do that:
open NEW , ">>$self->{natConf}";
print NEW "$hostPort = $vmIP";
where
$self->{natConf} = \Library\Preferences\VMware Fusion\vmnet8\nat.conf
So basically this is a .conf file. And even though it's not returning any error, it is not appending anything to the end of the file. I checked all the permissions, and read-write privileges have been provided. Is there anything I am missing here?
First of all, use strict and use warnings. This would have thrown errors and warnings for your code.
On Mac OS the delimiter in a path is /, as on other Unix-like systems, not \.
To assign a string to a variable, use quotation marks.
Do not use two-argument open but three-argument open (the arrow operator does not work in your usage of open anyway), and it is considered bad practice to use bareword filehandles.
use strict;
use warnings;
# your code here
$self->{natConf} = '/Library/Preferences/VMware Fusion/vmnet8/nat.conf';
# more code here
open my $fh, '>>', $self->{natConf} or die "open failed: $!\n";
print $fh "$hostPort = $vmIP";
close $fh;
# rest of code here
Suffering from buffering? Call close NEW when you are done writing to it, or call (*NEW)->autoflush(1) on it after you open it to force Perl to flush the output after every print.
Also check the return values of the open and print calls. If either of these functions fails, it will return false and set the $! variable.
And I second the recommendation about using strict and warnings.
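Combining those points, a checked version of the append could look like this sketch (the path and variables are the ones from the question):
use strict;
use warnings;
use IO::Handle;   # for autoflush() on lexical handles

open my $new, '>>', $self->{natConf} or die "open failed: $!";
$new->autoflush(1);                         # flush after every print
print {$new} "$hostPort = $vmIP\n" or die "print failed: $!";
close $new or die "close failed: $!";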