I wrote a Perl script to extract the contents of two arrays and store them in a file (out.log) in the format below:
open my $fh, ">", "out.log" or die "Cannot open out.log: $!";
for my $i (0..$#arr_1) {
    print $fh "$arr_1[$i]\t$arr_2[$i]\@gmail.com\n";
}
close $fh;
12345 joe@gmail.com
67890 jack@gmail.com
45678 john@gmail.com
Now, by reading the out.log file content, I have to send an e-mail to joe@gmail.com with the e-mail body
Your balance is: 12345
to jack@gmail.com,
Your balance is: 67890
and to john@gmail.com,
Your balance is: 45678
I can read the log file and build the mail body content as below, but I am unsure how to achieve the scenario described above.
my $mail_body = "Your balance is:\n";
{
local $/ = undef;
open FILE, "file" or die "...: $!";
$mail_body .= <FILE>;
close FILE;
}
Looking forward to your help. Thanks in advance.
I haven't tested this, but it should solve your problem:
use strict;
use warnings;
use Mail::Sendmail;
open my $fh, '<', 'out.log' or die "could not open file out.log: $!";
while (my $line = <$fh>) { # read the content of the file line by line
# match and capture balance and the email address
my ($balance, $to_email) = $line =~ /(\d+)\s+(\S+)/;
my %mail = (
    To      => $to_email,
    From    => 'your_email@address.com',   # placeholder sender address
    Message => "Your balance is: $balance",
);
sendmail(%mail);
}
close $fh;
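Before wiring this up to a real mail host, it may help to dry-run the parsing loop against sample lines and print what would be sent instead of calling sendmail(). A minimal sketch; the addresses are placeholders:

```perl
use strict;
use warnings;

# Sample lines in the same format as out.log: balance, tab, address.
my @lines = ("12345\tjoe\@gmail.com\n", "67890\tjack\@gmail.com\n");

for my $line (@lines) {
    # Same capture as above: digits, whitespace, then the address.
    my ($balance, $to_email) = $line =~ /(\d+)\s+(\S+)/;
    next unless defined $to_email;    # skip malformed lines
    print "To: $to_email -- Your balance is: $balance\n";
}
```

Once the printed output matches expectations, the print can be swapped back for the sendmail(%mail) call.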
I have two scripts written in Perl. The first one takes a file and sends it via a socket to a server. The server is my second script, and it saves what it receives to a file.
The server saves the file under a name that is fixed in the code. How can I take the name of the file being sent and send it to the server before sending the file itself?
My code below:
Client:
my $socket = IO::Socket::INET->new(
PeerAddr => $local_host,
PeerPort => $local_port,
Proto => 'tcp',
)or die "Alert!";
my $bandwidth = 1024*5 ; # 5 KB/s
open my $fh, '<', "$direc/$my_data"
or die "couldn't open the file";
my $buffer ;
while( sysread($fh, $buffer , $bandwidth) ) {
print $socket $buffer ;
sleep(1) ;
}
print "Data send.End \n" ;
close ($fh) ;
close($socket) ;
My server:
my $my_socket = new IO::Socket::INET(
LocalHost => $local_host,
LocalPort => $local_port,
Proto => 'tcp',
Listen => 5,
Reuse => 1
);
die "Couldn't open my_socket $!\n" unless $my_socket;
print "You can send the data now \n";
my $accepter = $my_socket->accept();
my $count=0;
#print "$directory.$save_dir/$my_data";
open my $fh, '>', "$direc/$save_dir/$my_data" #my data is the name, and it's "fixed", how to take it from client?
or die "Couldn't open the file";
while(<$accepter>){
chomp;
last if $count++ ==10;
say $fh $_;
}
print "End \n";
close $fh;
close $my_socket;
Having the server write a filename specified by the client is a security risk. The client could tell the server to overwrite files, including the server itself.
Instead, use a UUID for the real filename. Store the client filename / real filename pair elsewhere.
You need to come up with a protocol so the server can distinguish between the filename and content. We could use an existing format such as JSON or YAML, but they require slurping the whole file into memory and encoding the content. You could make something up, like "the first line is the filename", but we can do a little better.
If you want to stream, we can use a stripped down HTTP protocol. Send headers as Key: Value lines. A blank line ends headers and begins sending content. For just a little extra effort we have a simple protocol that's extensible.
Here's the main loop of the server using UUID::Tiny and also autodie.
# Read Key: Value headers until we see a blank line.
my %headers;
while(my $line = <$accepter>) {
chomp $line;
last if $line eq "";
my($key, $value) = split /\s*:\s*/, $line, 2;  # limit 2 so values may contain colons
$headers{$key} = $value;
}
# Store the file in a random filename. Do not use the client's filename
# to avoid a host of security issues.
my $filename = create_uuid_as_string(UUID_V4);
open my $fh, ">", "incoming/$filename";
# Read the content and save it to the file.
my $buf;
while( read($accepter, $buf, 1024) ) {
print $fh $buf;
}
say "$headers{Filename} was stored in incoming/$filename";
close $fh;
close $my_socket;
And the client simply sends a Filename header before sending the file's content.
open my $fh, '<', $filename;
print $socket "Filename: $filename\n\n";
my $buffer ;
while( sysread($fh, $buffer , $bandwidth) ) {
print $socket $buffer ;
}
I'm incredibly new to Perl, and have never been a phenomenal programmer. I have some successful BVA routines for controlling microprocessor functions, but never anything embedded or multi-faceted. Anyway, my question today is about a boggle I cannot get over when trying to figure out how to remove duplicate lines of text from a text file I created.
The file could have several of the same lines of text in it, not sequentially placed, which is problematic as I'm practically comparing the file to itself, line by line. So, if the first and third lines are the same, I'll write the first line to a new file, not the third. But when I compare the third line, I'll write it again, since the first line is "forgotten" by my current code. I'm sure there's a simple way to do this, but I have trouble making things simple in code. Here's the code:
my $searchString = pseudo variable "ideally an iterative search through the source file";
my $file2 = "/tmp/cutdown.txt";
my $file3 = "/tmp/output.txt";
my $count = "0";
open (FILE, $file2) || die "Can't open cutdown.txt \n";
open (FILE2, ">$file3") || die "Can't open output.txt \n";
while (<FILE>) {
print "$_";
print "$searchString\n";
if (($_ =~ /$searchString/) and ($count == "0")) {
++ $count;
print FILE2 $_;
} else {
print "This isn't working\n";
}
}
close (FILE);
close (FILE2);
Excuse the way filehandles and scalars do not match. It is a work in progress... :)
The secret of checking for uniqueness is to store the lines you have seen in a hash, and only print lines that don't already exist in the hash.
Updating your code slightly to use more modern practices (three-arg open(), lexical filehandles) we get this:
my $file2 = "/tmp/cutdown.txt";
my $file3 = "/tmp/output.txt";
open my $in_fh, '<', $file2 or die "Can't open cutdown.txt: $!\n";
open my $out_fh, '>', $file3 or die "Can't open output.txt: $!\n";
my %seen;
while (<$in_fh>) {
print $out_fh $_ unless $seen{$_}++;
}
But I would write this as a Unix filter. Read from STDIN and write to STDOUT. That way, your program is more flexible. The whole code becomes:
#!/usr/bin/perl
use strict;
use warnings;
my %seen;
while (<>) {
print unless $seen{$_}++;
}
Assuming this is in a file called my_filter, you would call it as:
$ ./my_filter < /tmp/cutdown.txt > /tmp/output.txt
Update: But this doesn't use your $searchString variable. It's not clear to me what that's for.
If your file is not very large, you can store each line read from the input file as a key in a hash variable, and then print the hash keys (ordered). Something like this:
my %lines = ();
my $order = 1;
open my $fhi, "<", $file2 or die "Cannot open file: $!";
while( my $line = <$fhi> ) {
$lines{$line} = $order++;
}
close $fhi;
open my $fho, ">", $file3 or die "Cannot open file: $!";
#Sort the keys, only if needed
my @ordered_lines = sort { $lines{$a} <=> $lines{$b} } keys(%lines);
for my $key ( @ordered_lines ) {
print $fho $key;
}
close $fho;
You need two things to do that:
a hash to keep track of all the lines you have seen
a loop reading the input file
This is a simple implementation, called with an input filename and an output filename.
use strict;
use warnings;
open my $fh_in, '<', $ARGV[0] or die "Could not open file '$ARGV[0]': $!";
open my $fh_out, '>', $ARGV[1] or die "Could not open file '$ARGV[1]': $!";
my %seen;
while (my $line = <$fh_in>) {
# check if we have already seen this line
if (not $seen{$line}) {
print $fh_out $line;
}
# remember this line
$seen{$line}++;
}
To test it, I've included it with the DATA handle as well.
use strict;
use warnings;
my %seen;
while (my $line = <DATA>) {
# check if we have already seen this line
if (not $seen{$line}) {
print $line;
}
# remember this line
$seen{$line}++;
}
__DATA__
foo
bar
asdf
foo
foo
asdfg
hello world
This will print
foo
bar
asdf
asdfg
hello world
Keep in mind that the memory consumption will grow with the file size. It should be fine as long as the text file is smaller than your RAM. Perl's hash memory consumption grows faster than linearly, but your data structure is very flat.
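If the file ever outgrows RAM, one variation (not part of the answer above, and carrying a vanishingly small collision risk) is to key the hash on a fixed-size digest of each line rather than the line itself, using the core Digest::MD5 module:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Keep only the first occurrence of each line, storing 16-byte
# digests as hash keys instead of the full lines.
sub uniq_lines {
    my %seen;
    return grep { !$seen{ md5($_) }++ } @_;
}

print uniq_lines("foo\n", "bar\n", "foo\n", "baz\n");
```

This caps the per-distinct-line cost at 16 bytes plus hash overhead, at the price of not being able to recover the original lines from the hash.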
I'm not much of a programmer and pretty new to Perl. I'm currently writing a little script to read from a CSV, do some evaluation on different fields and then print to another file if certain criteria are met. I thought I was pretty much done but then I got this new message:
"Usage: Text::CSV_Xs::getline(self, io) at date_compare.pl line 51, line 3."
I've been trying to find something that tells me what this message means but I'm lost. I know this is something simple. My code is below. Please excuse my ignorance.
#! /usr/local/ActivePerl-5.12/bin/perl
#this script will check two files, throw them into arrays and compare them
#to find entries in one array which meet specified criteria
#$field_file is the name of the file that contains the ablation date first,
#then the list of compared dates, then the low and high end date criteria, each
#value should end with a \n.
#$unfiltered_file is the name of the raw CSV with all the data
#$output_file is the name of the file the program will write to
use strict;
use 5.012;
use Text::CSV_XS;
use IO::HANDLE qw/getline/;
use Date::Calc qw/Decode_Date_US2 Delta_Days/;
my $csv = Text::CSV_XS->new ({ binary => 1, eol => $/ }) or
die "Cannot use CSV: ".Text::CSV->error_diag ();
my ($field_file,
$unfiltered_file,
$output_file,
@field_list,
$hash_keys,
%compare,
$check,
$i);
#Decode_Date_US2 scans a string and tries to parse any date within.
#($year,$month,$day)=Decode_Date_US2($string[,$language])
#Delta_Days returns the difference in days between the two given dates.
#$Dd = Delta_Days($year1,$month1,$day1, $year2,$month2,$day2);
sub days{
Delta_Days(Decode_Date_US2(@compare{$field_list[0]}),
Decode_Date_US2(@compare{$field_list[$i]}));
}
sub printout{
$csv->print(<OUTPUTF>, $check) or die "$output_file:".$csv->error_diag();
}
print "\nEnter the check list file name: ";
chomp ($field_file = <STDIN>);
open FIELDF, "<", $field_file or die "$field_file: $!";
chomp (@field_list=<$field_file>);
close FIELDF or die "$field_file: $!";
print "\nEnter the raw CSV file name: ";
chomp ($unfiltered_file = <STDIN>);
print "\nEnter the output file name : ";
chomp ($output_file = <STDIN>);
open OUTPUTF, ">>", $output_file or die "$output_file: $!";
open RAWF, "<", $unfiltered_file or die "$unfiltered_file: $!";
if ($hash_keys = $csv->getline(<RAWF>)){
$check = $hash_keys;
&printout();
}else{die "\$hash_keys: ".$csv->error_diag();}
while ($check = $csv->getline (<RAWF>)){
@compare{@$hash_keys}=@$check;
TEST: for ($i=1, $i==(@field_list-3), $i++){
if (&days()>=$field_list[-2] && &days()<=$field_list[-1]){
last TEST if (&printout());
}
}
Usage: Text::CSV_Xs::getline(self, io) at date_compare.pl line 51, line 3.
getline apparently expects a filehandle/IO::Handle, and you're passing it a scalar (containing a line read from the filehandle).
This means that on your line:
if ($hash_keys = $csv->getline(<RAWF>)){
You should be using:
if ($hash_keys = $csv->getline( \*RAWF )){
instead.
(But you should really be using lexical filehandles, as in:)
open FIELDF, "<", $field_file or die "$field_file: $!";
chomp (@field_list=<$field_file>); # Not sure how you expect this to work
close FIELDF or die "$field_file: $!";
would become:
open my $fieldf, "<", $field_file or die "$field_file: $!";
chomp (@field_list=<$fieldf>);
close $fieldf or die "$field_file: $!";
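Putting both fixes together, a minimal getline() loop with a lexical filehandle might look like the sketch below; the in-memory filehandle stands in for the real raw CSV file, and the column names are made up:

```perl
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1 })
    or die "Cannot use CSV: " . Text::CSV_XS->error_diag;

# An in-memory filehandle stands in for the raw CSV file here.
my $data = "name,balance\njoe,12345\njack,67890\n";
open my $rawf, '<', \$data or die "in-memory open: $!";

# getline() takes the filehandle itself, never a line read from it.
my $hash_keys = $csv->getline($rawf);
while (my $check = $csv->getline($rawf)) {
    my %compare;
    @compare{@$hash_keys} = @$check;   # header => value for this row
    print "$compare{name}: $compare{balance}\n";
}
close $rawf;
```

The same loop works unchanged on a handle opened on a real file.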
I'm trying to read a binary file with the following code:
open(F, "<$file") || die "Can't read $file: $!\n";
binmode(F);
$data = <F>;
close F;
open (D,">debug.txt");
binmode(D);
print D $data;
close D;
The input file is 16M; debug.txt is only about 400k. When I look at debug.txt in emacs, the last two chars are ^A^C (SOH and ETX chars, according to Notepad++), although that same pattern also appears earlier in debug.txt. The next line in the file does have a ^O (SI) char, and I think that's the first occurrence of that particular character.
How can I read in this entire file?
If you really want to read the whole file at once, use slurp mode. Slurp mode can be turned on by setting $/ (which is the input record separator) to undef. This is best done in a separate block so you don't mess up $/ for other code.
my $data;
{
open my $input_handle, '<', $file or die "Cannot open $file for reading: $!\n";
binmode $input_handle;
local $/;
$data = <$input_handle>;
close $input_handle;
}
open my $output_handle, '>', 'debug.txt' or die "Cannot open debug.txt for writing: $!\n";
binmode $output_handle;
print {$output_handle} $data;
close $output_handle;
Use my $data for a lexical and our $data for a global variable.
TIMTOWTDI.
File::Slurp is the shortest way to express what you want to achieve. It also has built-in error checking.
use File::Slurp qw(read_file write_file);
my $data = read_file($file, binmode => ':raw');
write_file('debug.txt', {binmode => ':raw'}, $data);
The IO::File API solves the global variable $/ problem in a more elegant fashion.
use IO::File qw();
my $data;
{
my $input_handle = IO::File->new($file, 'r') or die "could not open $file for reading: $!";
$input_handle->binmode;
$input_handle->input_record_separator(undef);
$data = $input_handle->getline;
}
{
my $output_handle = IO::File->new('debug.txt', 'w') or die "could not open debug.txt for writing: $!";
$output_handle->binmode;
$output_handle->print($data);
}
I don't think this is about using slurp mode or not, but about correctly handling binary files.
instead of
$data = <F>;
you should do
read(F, $buffer, 1024);
This will only read 1024 bytes, so you have to increase the buffer or read the whole file part by part using a loop.
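A copy loop along those lines can be sketched as follows; in-memory filehandles stand in for the 16M input and debug.txt so the sketch is self-contained:

```perl
use strict;
use warnings;

my $input  = "\x01\x03\x0f" x 2000;   # arbitrary binary data, 6000 bytes
my $copied = '';
open my $in,  '<', \$input  or die "in: $!";
open my $out, '>', \$copied or die "out: $!";
binmode $in;
binmode $out;

my $buffer;
# read() returns the byte count, 0 at end of file, undef on error,
# so this copies the whole stream 1024 bytes at a time.
while (read($in, $buffer, 1024)) {
    print {$out} $buffer;
}
close $in;
close $out;

print length($copied), "\n";   # 6000
```

The 1024-byte buffer size is arbitrary; a larger buffer (say 64K) means fewer read() calls for a 16M file.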
I need to create Perl code that counts paragraphs in text files. I tried this and it doesn't work:
open(READFILE, "<$filename")
or die "could not open file \"$filename\":$!";
$paragraphs = 0;
my($c);
while($c = getc(READFILE))
{
if($C ne"\n")
{
$paragraphs++;
}
}
close(READFILE);
print("Paragraphs: $paragraphs\n");
See perlfaq5: How can I read in a file by paragraphs?
local $/ = ''; # enable paragraph mode
open my $fh, '<', $file or die "can't open $file: $!";
1 while <$fh>;
my $count = $.;
Have a look at the Beginning Perl book at http://www.perl.org/books/beginning-perl/. In particular, the following chapter will help you: http://docs.google.com/viewer?url=http%3A%2F%2Fblob.perl.org%2Fbooks%2Fbeginning-perl%2F3145_Chap06.pdf
If you're determining paragraphs by a double-newline ("\n\n") then this will do it:
open READFILE, "<$filename"
or die "cannot open file `$filename' for reading: $!";
my @paragraphs;
{local $/; @paragraphs = split "\n\n", <READFILE>} # slurp-split
my $num_paragraphs = scalar @paragraphs;
__END__
Otherwise, just change the "\n\n" in the code to use your own paragraph separator. It may even be a good idea to use the pattern \n{2,}, just in case someone went crazy on the enter key.
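For instance, \n{2,} collapses any run of blank lines into a single paragraph break, where a literal "\n\n" split would produce spurious empty paragraphs:

```perl
use strict;
use warnings;

my $text = "first paragraph\nstill first\n\nsecond\n\n\n\nthird\n";

# Any run of two or more newlines counts as one paragraph break.
my @paragraphs = split /\n{2,}/, $text;
print scalar(@paragraphs), "\n";   # 3

# A plain "\n\n" split sees the four consecutive newlines as two
# separate breaks and yields an empty fourth field.
my @naive = split "\n\n", $text;
print scalar(@naive), "\n";   # 4
```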
If you are worried about memory consumption, then you may want to do something like this (sorry for the hard-to-read code):
my $num_paragraphs;
{local $/; $num_paragraphs = @{[ <READFILE> =~ /\n\n/g ]} + 1}
Although, if you want to keep using your own code, you can change if($C ne"\n") to if($c eq "\n").