Perl shortcuts "$/" and "$\"

I am confused about how the Perl shortcut variables are used exactly, and I am especially confused about the variables $/ and $\.
Can you please help me with this, as I am new to Perl scripting?

For $/: It is the input record separator. When you read from an input source (e.g. a file) with my $line = <FILEHANDLE>, Perl will read data from the file until it encounters the content of $/. It defaults to the newline character "\n", which gives us the normal understanding of what a line is.
However, when you unset $/, Perl will read the whole input stream in one call. It is therefore a common idiom to unset $/ locally and read the whole file, e.g.
my $whole_file = do {
    local $/;
    <FILE_HANDLE>
};
or something similar.
$\, on the other hand, is the output record separator: its content is appended after each call to print. It is undefined by default, meaning you have to add things like newline characters yourself.
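For illustration, here is a minimal sketch of $\ in action (the strings are made up):
{
    local $\ = "\n";         # output record separator: appended after every print
    print "first line";      # actually prints "first line\n"
    print "second line";     # actually prints "second line\n"
}
print "no newline here";     # $\ is undef again outside the block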
All those things are explained in detail in the perlvar documentation page.

Related

PERL: String Replacement on file

I am working on a script to do string replacement in a file. I read the variables, values, and file names from a configuration file and then do the string replacement.
Here is my logic to do a string replacement.
sub expansion($$$) {
    my $f     = shift(@_); # file name
    my $vname = shift(@_); # variable name for pattern match
    my $value = shift(@_); # value to replace
    my $n     = "$f" . ".new";
    open( O, "<$f" ) or print("Can't open $f file: $!");
    open( N, ">$n" ) or print("Can't open $n file: $!");
    while (<O>) {
        $_ =~ s/$vname/$value/g; # check for pattern
        print N "$_";
    }
    close(O);
    close(N);
}
In my logic I am reading line by line from the input file ($f), checking for the pattern, and writing to a new file ($n).
Instead of writing to a new file, is there any way to do the string replacement in the original file? When I try to do that, I just end up with an empty file with no contents.
Do not. Never, ever[1]. Don't you dare, don't even think of it, do not use subroutine prototyping. It is horribly broken (that is, it doesn't do what you think it does) and is dangerous.
Now that we've got that out of the way:
Yes, you can do what you want. You can open a file as both readable and writable by using the mode +<. So far, so good.
However, due to buffering, you cannot use the standard read and write methods to read and write to the file. Instead, you need to use sysread and syswrite.
Then, what you need to do is read the line, use sysseek to go back to the start of where you read, and then write to that spot.
Not only is it very complex to do, but it is full of peril. Let's take a simple example. I have a document, and I want to replace my curly quotes with straight quotes.
$line =~ s/“|”/"/g;
That should work. I'm replacing one character with another. What could go wrong?
If this is a UTF-8 file (what Macs and Linux systems use by default), those curly quotes are two-byte characters and that straight quote is a single byte character. I would be writing back a line that was shorter than the line I read in. My buffer is going to be off.
Back in the days when computer memory and storage were measured in kilobytes, and you used serial devices like reel-to-reel tapes, this type of operation was quite common. However, in this age where storage is vast, it's simply not worth the complexity and the error-prone process that this entails. Stick with reading from one file and writing to another. Then use unlink and rename to delete the original and to rename the copy to the original's name.
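Here is a rough sketch of that copy-and-rename approach (the subroutine and file names are my own; I'm also using autodie, as recommended below):
use strict;
use warnings;
use autodie;    # open, close, unlink, and rename now die on failure by themselves

sub replace_in_file {
    my ( $filename, $pattern, $replacement ) = @_;
    my $tempname = "$filename.new";

    open my $in_fh,  "<", $filename;
    open my $out_fh, ">", $tempname;

    while ( my $line = <$in_fh> ) {
        $line =~ s/\Q$pattern\E/$replacement/g;
        print {$out_fh} $line;
    }

    close $in_fh;
    close $out_fh;

    unlink $filename;               # delete the original ...
    rename $tempname, $filename;    # ... and give the copy the original's name
}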
A few more pointers:
Don't print if the file can't be opened. Use die. Otherwise, your program will simply continue on blithely unaware that it is not working. Even better, use the pragma use autodie;, and you won't have to worry about testing whether or not a read/write failed.
Use scalars for file handles.
That is, instead of
open OUT, ">my_file.txt";
use
open my $out_fh, ">my_file.txt";
And, it is highly recommended to use the three parameter open:
Use
open my $out_fh, ">", "my_file.txt";
If you aren't already, always add use strict; and use warnings;.
In fact, your Perl syntax is a bit ancient. You need to get a book on Modern Perl. Perl originally was written as a hack language to replace shell and awk programming. However, Perl has morphed into a full-fledged language that can handle complex data types, object orientation, and large projects. Learning the modern syntax of Perl will help you find errors and become a better developer.
[1] Like all rules, this can be broken, but only if you have a clear and careful understanding of what is going on. It's like those shows that say "Don't do this at home. We're professionals."
sub inplace_expansion($$$) {
    my $f     = shift(@_); # file name
    my $vname = shift(@_); # variable name for pattern match
    my $value = shift(@_); # value to replace
    local @ARGV = ( $f );
    local $^I = '';
    while (<>) {
        s/\Q$vname/$value/g; # check for pattern
        print;
    }
}
or, my preference would run closer to this (basically equivalent, changes mostly in formatting, variable names, etc.):
use English;
sub inplace_expansion {
    my ( $filename, $pattern, $replacement ) = @_;
    local @ARGV = ( $filename ),
          $INPLACE_EDIT = '';
    while ( <> ) {
        s/\Q$pattern/$replacement/g;
        print;
    }
}
The trick with local basically simulates a command-line in-place edit (as one would run with perl -p -i -e); for more details, see perldoc perlrun. For more on $^I (aka $INPLACE_EDIT), see perldoc perlvar.
(For the business with \Q (in the s/// expression), see perldoc -f quotemeta. This is unrelated to your question, but good to know. Also be aware that passing regex patterns around in variables (as opposed to, e.g., using literal regexes exclusively) can be vulnerable to injection attacks; Perl's built-in taint mode is useful here.)
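A tiny illustration of what \Q changes (the strings here are made up):
my $text    = "version 1x50 installed";
my $pattern = "1.50";

print "unquoted pattern matches\n" if $text =~ /$pattern/;    # true: the "." matches the "x"
print "quoted pattern matches\n"   if $text =~ /\Q$pattern/;  # false: the "." is now a literal dot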
EDIT: David W. is right about prototypes.

Why is the perl print filehandle syntax the way it is?

I am wondering why the perl creators chose an unusual syntax for printing to a filehandle:
print filehandle list
with no comma after filehandle. I see that it's to distinguish between "print list" and "print filehandle list", but why was the ad-hoc syntax preferred over creating two functions - one to print to STDOUT and one to print to a given filehandle?
In my searches, I came across the explanation that this is an indirect object syntax, but didn't the print function exist in perl 4 and before, whereas the object-oriented features came into perl relatively late? Is anyone familiar with the history of print in perl?
Since the comma is already used as the list constructor, you can't use it to separate semantically different arguments to print.
open my $fh, ...;
print $fh, $foo, $bar
would just look like you were trying to print the values of 3 variables. There's no way for the parser, which operates at compile time, to tell that $fh is going to refer to a file handle at run time. So you need a different character to syntactically (not semantically) distinguish between the optional file handle and the values to actually print to that file handle.
At this point, it's no more work for the parser to recognize that the first argument is separated from the second argument by blank space than it would be if it were separated by any other character.
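Incidentally, when the handle sits in a variable or a more complicated expression, you can also mark that first slot explicitly with braces; a small sketch (the file name is made up):
open my $log_fh, ">", "output.log" or die "Can't open output.log: $!";
print $log_fh "still no comma after the handle\n";
print {$log_fh} "the braces make the filehandle slot explicit\n";
close $log_fh;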
If Perl had used the comma to make print look more like a function, the filehandle would always have to be included if you are including anything to print besides $_. That is the way functions work: if you pass in a second parameter, the first parameter must also be included. There isn't one function I can think of in Perl where the first parameter is optional when the second parameter exists. Take a look at split. It can be written using zero to three parameters. However, if you want to specify a LIMIT, you have to specify the first two parameters too.
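For example, to pass a LIMIT to split, the pattern and the string have to be given first (a small made-up illustration):
my $record = "name:street:city:phone";
# LIMIT is the third argument, so PATTERN and EXPR must be supplied before it.
my ( $name, $rest ) = split /:/, $record, 2;
print "$name | $rest\n";    # prints "name | street:city:phone"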
If you look at other languages, they all include two different ways to print: one if you want STDOUT, and another if you're printing to something besides STDOUT. Thus, Python has both print and write. C has both printf and fprintf. However, Perl can do this with just a single statement.
Let's look at the print statement a bit more closely -- thinking back to 1987 when Perl was first written.
You can think of the print syntax as really being:
print <filehandle> <list_to_print>
To print to a file opened with the filehandle OUTFILE, you would say:
print OUTFILE "This is being printed to myfile.txt\n";
The syntax is almost English-like (PRINT to OUTFILE the string "This is being printed to myfile.txt\n").
You can also do the same thing with STDOUT:
print STDOUT "This is being printed to your console";
print STDOUT " unless you redirected the output.\n";
As a shortcut, if the filehandle was not given, it would print to STDOUT or whatever filehandle the select was set to.
print "This is being printed to your console";
print " unless you redirected the output.\n";
select OUTFILE;
print "This is being printed to whatever the filehandle OUTFILE is pointing to\n";
Now, we see the thinking behind this syntax.
Imagine I have a program that normally prints to the console. However, my boss now wants some of that output printed to various files when required, instead of STDOUT. In Perl, I could easily add a few select statements, and my problems would be solved. In Python, Java, or C, I would have to modify each of my print statements, or have some logic to choose between a file write and a write to STDOUT (which may involve some conniptions in file opening and dupping to STDOUT).
Remember that Perl wasn't written to be a full-fledged language. It was written to do the quick and dirty job of parsing text files more easily and flexibly than awk did. Over the years, people used it because of its flexibility, and new concepts were added on top of the old ones. For example, before Perl 5, there was no such thing as references, which meant there was no such thing as object-oriented programming. If we, back in the days of Perl 3 or Perl 4, needed something more complex than the simple list, hash, or scalar variable, we had to munge it ourselves. It's not like complex data structures were unheard of. C had struct from its beginnings. Heck, even Pascal had the concept with records back in 1969 when people thought bellbottoms were cool. (We plead insanity. We were all on drugs.) However, neither the Bourne shell nor awk had complex data structures, so why would Perl need them?
Answer to "why" is probably subjective and something close to "Larry liked it".
Do note, however, that indirect object notation is not a feature of print, but a general notation that can be used with any object or class and method. For example, with LWP::UserAgent:
use strict;
use warnings;
use LWP::UserAgent;
my $ua = new LWP::UserAgent;
my $response = get $ua "http://www.google.com";
my $response_content = decoded_content $response;
print $response_content;
Any time you write method object, it means exactly the same as object->method. Note also that the parser seems to work reliably only as long as you don't nest such notations or use complex expressions to obtain the object, so unless you want to have lots of fun with brackets and quoting, I'd recommend against using it anywhere except the common cases of print, close, and the rest of the IO methods.
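For comparison, here is the same example rewritten with the arrow notation it is equivalent to:
use strict;
use warnings;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $response = $ua->get("http://www.google.com");
my $response_content = $response->decoded_content;
print $response_content;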
Why not? It's concise and it works, in Perl's DWIM spirit.
Most likely it's that way because Larry Wall liked it that way.

How can I have Perl take input from STDIN one character at a time?

I am somewhat of a beginner at Perl (compared to the people here). I know enough to be able to write programs that do many things through the command prompt. At one point, I decided to write a command prompt game that constructed a maze and let me solve it. Besides quality graphics, the only thing it was missing was the ability for me to use the WASD controls without pressing Enter after every move I made in the maze.
To make my game work, I want to be able to have Perl take a single character as input from STDIN, without requiring me to use something to separate my input, like the default \n. How would I accomplish this?
I have tried searching for a simple answer online and in a book that I have, but I didn't seem to find anything. I tried setting $/="", but that seemed to bypass all input. I think that there may be a really simple answer to my question, but I am also afraid that it might be impossible.
Also, does $/="" actually bypass input, or does it take input so quickly that it assumes there isn't any input if I'm not already pressing the key?
IO::Prompt can be used:
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Prompt;
my $key = prompt '', -1;
print "\nPressed key: $key\n";
Relevant excerpt from perldoc -v '$/' related to setting $/ = '':
The input record separator, newline by default. This influences Perl's
idea of what a "line" is. Works like awk's RS variable, including
treating empty lines as a terminator if set to the null string (an empty line cannot contain any spaces or tabs).
The shortest way to achieve your goal is to use this special construct:
$/ = \1;
This tells perl to read one character at a time. The next time you read from any stream (not just STDIN)
my $char = <STREAM>;
it will read 1 character per assignment. From perlvar "Setting $/ to a reference to an integer, scalar containing an integer, or scalar that's convertible to an integer will attempt to read records instead of lines, with the maximum record size being the referenced integer number of characters."
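A small self-contained sketch of that (note that the terminal itself usually buffers keystrokes until Enter is pressed; $/ only controls how Perl splits whatever it receives):
#!/usr/bin/env perl
use strict;
use warnings;

$/ = \1;                    # read at most one character per <STDIN> read
print "Type something and press Enter: ";
my $char = <STDIN>;         # $char now holds a single character
print "\nFirst character read: $char\n";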
If you are using *nix, you will find Curses useful.
It has a getch method that does what you want.
Term::TermKey also looks like a potential solution.
IO::Prompt is no longer maintained, but IO::Prompter has a nice example (quoted from that site):
use IO::Prompter;
# This call has no automatically added options...
my $assent = prompt "Do you wish to take the test?", -yn;
{
use IO::Prompter [-yesno, -single, -style=>'bold'];
# These three calls all have: -yesno, -single, -style=>'bold' options
my $ready = prompt 'Are you ready to begin?';
my $prev = prompt 'Have you taken this test before?';
my $hints = prompt 'Do you want hints as we go?';
}
# This call has no automatically added options...
scalar prompt 'Type any key to start...', -single;

Clarification on chomp

I'm on break from classes right now and decided to spend my time learning Perl. I'm working with Beginning Perl (http://www.perl.org/books/beginning-perl/) and I'm finishing up the exercises at the end of chapter three.
One of the exercises asked that I "Store your important phone numbers in a hash. Write a program to look up numbers by the person's name."
Anyway, I had come up with this:
#!/usr/bin/perl
use warnings;
use strict;
my %name_number = (
    Me        => "XXX XXX XXXX",
    Home      => "YYY YYY YYYY",
    Emergency => "ZZZ ZZZ ZZZZ",
    Lookup    => "411",
);
print "Enter the name of who you want to call (Me, Home, Emergency, Lookup)", "\n";
my $input = <STDIN>;
print "$input can be reached at $name_number{$input}\n";
And it just wouldn't work. I kept getting this error message:
Use of uninitialized value in concatenation (.) or string at hello.plx
line 17, line 1
I tried playing around with the code some more but each "solution" looked more complex than the "solution" that came before it. Finally, I decided to check the answers.
The only difference between my code and the answer was the presence of chomp ($input); after <STDIN>;.
Now, the author has used chomp in previous examples, but he didn't really cover what chomp was doing. So, I found this answer on www.perlmeme.org:
The chomp() function will remove (usually) any newline character from
the end of a string. The reason we say usually is that it actually
removes any character that matches the current value of $/ (the input
record separator), and $/ defaults to a newline.
Anyway, my questions are:
What newlines are getting removed? Does Perl automatically append a "\n" to the input from <STDIN>? I'm just a little unclear because when I read "it actually removes any character that matches the current value of $/", I can't help but think "I don't remember putting a $/ anywhere in my code."
I'd like to develop best practices as soon as possible - is it best to always include chomp after <STDIN> or are there scenarios where it's unnecessary?
<STDIN> reads to the end of the input line, which contains a newline if you press Return to enter it, which you probably do.
chomp removes the newline at the end of a string. $/ is a variable (as you found, defaulting to newline) that you probably don't have to worry about; it just tells Perl what the 'input record separator' is, which I'm assuming means it defines how far <FILEHANDLE> reads. You can pretty much forget about it for now; it seems like an advanced topic. Just pretend chomp chomps off a trailing newline. Honestly, I've never even heard of $/ before.
As for your other question, it is generally cleaner to always chomp variables and add newlines as needed later, because you don't always know whether a variable has a newline or not; by always chomping variables you always get the same behavior. There are scenarios where it is unnecessary, but if you're not sure, it can't hurt to chomp it.
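A quick illustration of that habit (the variable name is made up):
chomp( my $name = <STDIN> );    # strip the trailing newline once, right away
print "Hello, $name!\n";        # add a newline back only where you actually want one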
Hope this helps!
OK, as for 1), Perl doesn't add any \n to the input. It is you who hit Enter when you finished entering the number. If you don't set $/, it defaults to \n (under UNIX, at least).
As for 2), chomp will be needed whenever input comes from the user, or whenever you want to remove the line-ending character (when reading from a file, for example).
Finally, the error you're getting may be from Perl not understanding your variable within the double quotes of the last print, because it has a _ character. Try writing the string as follows:
print "$input can be reached at ${name_number{$input}}\n";
(note the {} around the last variable).
<STDIN> is a short-cut notation for readline( *STDIN );. What readline() does is read the file handle until it encounters the contents of $/ (aka $INPUT_RECORD_SEPARATOR) and return everything it has read, including the contents of $/. What chomp() does is remove one trailing occurrence of the contents of $/, if present.
That content is often called a newline character, but it may be composed of more than one character. On Linux it is an LF character, but on Windows it is CR-LF.
See:
perldoc -f readline
perldoc -f chomp
perldoc perlvar and search for /\$INPUT_RECORD_SEPARATOR/
I think best practice here is to write:
chomp(my $input = <STDIN>);
Here is a quick example of how the chomp function works (the role of $/ was explained above), removing just one trailing newline (if any):
chomp (my $input = "Me\n"); # OK
chomp ($input = "Me"); # OK (nothing done)
chomp ($input = "Me\n\n"); # $input now is "Me\n";
chomp ($input); # finally "Me"
print "$input can be reached at $name_number{$input}\n";
BTW: The funny thing is that I am learning Perl too, and I reached hashes five minutes ago.
Though it may be obvious, it's still worth mentioning why the chomp is needed here.
The hash created contains 4 lookup keys: "Me", "Home", "Emergency" and "Lookup"
When $input is specified from <STDIN>, it'll contain "Me\n", "Me\r\n" or some other line-ending variant depending on what operating system is being used.
The uninitialized value error comes about because the "Me\n" key does not exist in the hash. And this is why the chomp is needed:
my $input = <STDIN>; # "Me\n" --> Key DNE, $name_number{$input} not defined
chomp $input; # "Me" --> Key exists, $name_number{$input} defined

is there a way to designate the line token delimiter in Perl's file reader?

I'm reading a text file via CGI in Perl, and I'm noticing that when the file is saved in Mac's TextEdit the line separator is recognized, but when I upload a CSV that is exported straight from Excel, it is not. I'm guessing it's a \n vs. \r issue, but it got me thinking that I don't know how to specify what I would like the line terminator token to be, if I didn't want the one it looks for by default.
Yes. You'll want to override the value of $/. From perlvar:
$/
The input record separator, newline by default. This influences Perl's idea of what a "line" is. Works like awk's RS variable, including treating empty lines as a terminator if set to the null string. (An empty line cannot contain any spaces or tabs.) You may set it to a multi-character string to match a multi-character terminator, or to undef to read through the end of file. Setting it to "\n\n" means something slightly different than setting to "", if the file contains consecutive empty lines. Setting to "" will treat two or more consecutive empty lines as a single empty line. Setting to "\n\n" will blindly assume that the next input character belongs to the next paragraph, even if it's a newline. (Mnemonic: / delimits line boundaries when quoting poetry.)
local $/; # enable "slurp" mode
local $_ = <FH>; # whole file now here
s/\n[ \t]+/ /g;
Remember: the value of $/ is a string, not a regex. awk has to be better for something. :-)
Setting $/ to a reference to an integer, scalar containing an integer, or scalar that's convertible to an integer will attempt to read records instead of lines, with the maximum record size being the referenced integer. So this:
local $/ = \32768; # or \"32768", or \$var_containing_32768
open my $fh, "<", $myfile or die $!;
local $_ = <$fh>;
will read a record of no more than 32768 bytes from FILE. If you're not reading from a record-oriented file (or your OS doesn't have record-oriented files), then you'll likely get a full chunk of data with every read. If a record is larger than the record size you've set, you'll get the record back in pieces. Trying to set the record size to zero or less will cause reading in the (rest of the) whole file.
On VMS, record reads are done with the equivalent of sysread, so it's best not to mix record and non-record reads on the same file. (This is unlikely to be a problem, because any file you'd want to read in record mode is probably unusable in line mode.) Non-VMS systems do normal I/O, so it's safe to mix record and non-record reads of a file.
See also "Newlines" in perlport. Also see $..
The variable has multiple names:
$/
$RS
$INPUT_RECORD_SEPARATOR
For the longer names, you need:
use English;
Remember to localize carefully:
{
local($/) = "\r\n";
...code to read...
}
If you are reading in a file with CRLF line terminators, you can open it with the CRLF discipline, or set the binmode of the handle to do automatic translation.
open my $fh, '<:crlf', 'the_csv_file.csv' or die "Oh noes $!";
This will transparently convert \r\n sequences into \n sequences.
You can also apply this translation to an existing handle by doing:
binmode( $fh, ':crlf' );
:crlf mode is typically default in Win32 Perl environments and works very well in practice.
For reading a CSV file, follow Robert-P's advice in his comment, and use a CSV module.
But for the general case of reading lines from a file with different line endings, what I generally do is slurp the file whole and split it on \R. If it's not a multi-gigabyte file, that should be the safest and easiest way.
So:
perl -ln -0777 -e 'my @lines = split /\R/;
print length($_), " bytes split into ", scalar(@lines), " lines."' $YOUR_FILE
or in your script:
{
    local $/ = undef;
    open F, $YOUR_FILE or die;
    @lines = split /\R/, <F>;
    close F;
}
\R works with Unix LF (\x0A), Windows/Internet CRLF, and also with CR (\x0D) which was used by Macs in the nineties, but is in fact still used by some Mac programs.
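A tiny self-contained check of that, with all three endings in one string:
my $mixed = "unix\nwindows\r\nold mac\rlast";
my @lines = split /\R/, $mixed;
print scalar(@lines), " lines: @lines\n";    # prints "4 lines: unix windows old mac last"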
From the perldoc:
\R matches a generic newline; that is, anything considered a linebreak
sequence by Unicode. This includes all characters matched by \v
(vertical whitespace), and the multi character sequence "\x0D\x0A"
(carriage return followed by a line feed, sometimes called the network
newline; it's the end of line sequence used in Microsoft text files
opened in binary mode)
Or see this much nicer and exhaustive explanation about \R in Brian D Foy's article, The \R generic line ending, which even has a couple of fun videos.