How can I append characters to a line in a file? - perl

I have a CSV file that was extracted from a ticketing system (which I have no direct DB access to) and need to append a couple of columns to it from another database before creating reports from it in Excel.
I'm using Perl to pull data out of the other database and would like to just append the additional columns to the end of each line as I process the file.
Is there a way to do this without having to basically create a new file? The basic structure is:
foreach $line (@lines) {
    my ($vars here....) = split(',', $line);
    ## get additional fields
    ## append new column data to line
}

You could look at DBD::CSV to treat the file as if it were a database (which would also handle escaping special characters for you).
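For example, a minimal (untested) sketch of that approach; the file name tickets.csv and the f_dir/f_ext settings here are just placeholder assumptions:
use strict;
use warnings;
use DBI;    # DBD::CSV plugs into the standard DBI interface

# Treat a directory of CSV files as a database; each file becomes a "table".
my $dbh = DBI->connect("dbi:CSV:", undef, undef, {
    f_dir      => ".",        # where the CSV files live
    f_ext      => ".csv/r",   # so table "tickets" maps to tickets.csv
    RaiseError => 1,
});

my $sth = $dbh->prepare("SELECT * FROM tickets");
$sth->execute;
while (my $row = $sth->fetchrow_hashref) {
    # ...look up the extra columns for this ticket and write them out...
}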

You can use Tie::File (in the Perl core since Perl 5.8) to modify a file in place:
#!/usr/bin/perl
use strict;
use warnings;
use Tie::File;

my $file = shift;
tie my @lines, "Tie::File", $file
    or die "could not open $file: $!\n";

for my $line (@lines) {
    $line .= join ",", '', get_data();
}

sub get_data {
    my $data = <DATA>;
    chomp $data;
    return split /-/, $data;
}

__DATA__
1-2-3-4
5-6-7-8
You can also use in-place editing with the @ARGV/<> trick by setting $^I:
#!/usr/bin/perl
use strict;
use warnings;

$^I = ".bak";

while (my $line = <>) {
    chomp $line;
    $line .= join ",", '', get_data();
    print "$line\n";
}

sub get_data {
    my $data = <DATA>;
    chomp $data;
    return split /-/, $data;
}

__DATA__
1-2-3-4
5-6-7-8

Despite any nice interfaces, you eventually have to read the file line by line. You might even have to do more than that if some quoted fields can have embedded newlines. Use something that knows about CSV to avoid those problems. Text::CSV_XS should save you most of the hassle of the odd cases.
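For example, a rough sketch of the append-columns loop with Text::CSV_XS (untested; the file names and get_additional_fields() are placeholders, and it writes a new file rather than editing in place):
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1, eol => "\n" });

open my $in,  '<', 'tickets.csv'     or die "can't read tickets.csv: $!";
open my $out, '>', 'tickets_new.csv' or die "can't write tickets_new.csv: $!";

while (my $row = $csv->getline($in)) {
    # get_additional_fields() stands in for the lookup in the other database
    push @$row, get_additional_fields($row->[0]);
    $csv->print($out, $row);    # quoting and escaping handled for you
}
close $out or die "can't close tickets_new.csv: $!";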

Consider using the -i option to edit <> files in-place.
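For example, to keep a .bak backup and tack two placeholder columns onto every line:
perl -i.bak -pe 's/$/,extra1,extra2/' tickets.csv
Here extra1 and extra2 are just literals standing in for whatever you pull from the other database.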


how to assign data into hash from an input file

I am new to Perl.
Inside my input file is:
james1
84012345

aaron5
2332111 42332
2345112 18238

wayne[2]
3505554
Question: I am not sure what the correct way is to read the input and set the name as the key and the numbers as the values. For example, "james1" is the key and "84012345" is the value.
This is my code:
#!/usr/bin/perl -w
use strict;
use warnings;
use Data::Dumper;
my $input= $ARGV[0];
my %hash;
open my $data , '<', $input or die " cannot open file : $_\n";
my @names = split ' ', $data;
my @values = split ' ', $data;
@hash{@names} = @values;
print Dumper \%hash;
I'mma go over your code real quick:
#!/usr/bin/perl -w
-w is not recommended. You should use warnings; instead (which you're already doing, so just remove -w).
use strict;
use warnings;
Very good.
use Data::Dumper;
my $input= $ARGV[0];
OK.
my %hash;
Don't declare variables before you need them. Declare them in the smallest scope possible, usually right before their first use.
open my $data , '<', $input or die " cannot open file : $_\n";
You have a spurious space at the beginning of your error message and $_ is unset at this point. You should include $input (the name of the file that failed to open) and $! (the error reason) instead.
my @names = split ' ', $data;
my @values = split ' ', $data;
Well, this doesn't make sense. $data is a filehandle, not a string. Even if it were a string, this code would assign the same list to both @names and @values.
@hash{@names} = @values;
print Dumper \%hash;
My version (untested):
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

@ARGV == 1
    or die "Usage: $0 FILE\n";

my $file = $ARGV[0];

my %hash;

{
    open my $fh, '<', $file or die "$0: can't open $file: $!\n";

    local $/ = '';

    while (my $paragraph = readline $fh) {
        my @words = split ' ', $paragraph;
        my $key = shift @words;
        $hash{$key} = \@words;
    }
}

print Dumper \%hash;
The idea is to set $/ (the input record separator) to "" for the duration of the input loop, which makes readline return whole paragraphs, not lines.
The first (whitespace separated) word of each paragraph is taken to be the key; the remaining words are the values.
You have opened a file with open() and attached the file handle to $data. The regular way of reading data from a file is to loop over each line, like so:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;

my $input = $ARGV[0];
my %hash;

open my $data, '<', $input or die "cannot open file '$input': $!\n";

while (my $line = <$data>) {
    chomp $line;               # Remove the trailing newline (\n)
    if ($line) {               # Skip empty lines
        my ($key, $value) = split ' ', $line;
        $hash{$key} = $value;
    }
}

print Dumper \%hash;
OK, +1 for using strict and warnings.
First, take a look at the $/ variable for controlling how a file is broken into records when it's read in.
$data is a file handle; you need to extract the data from the file. If it's not too big you can load it all into an array; if it's a large file you can loop over one record at a time. See the <> operator in perlop.
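For example, with blank-line separated records like the sample input, slurping every record into an array can be as simple as this (a small sketch):
local $/ = "";          # paragraph mode: one blank-line separated record per read
my @records = <$data>;  # every record ends up as one array element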
Looking at your code, it appears that you want to end up with the following data structure from your input file:
%hash = (
    james1 => [
        84012345,
    ],
    aaron5 => [
        2332111,
        42332,
        2345112,
        18238,
    ],
    'wayne[2]' => [
        3505554,
    ],
);
See perldsc on how to do that.
All the documentation can be read using the perldoc command which comes with Perl. Running perldoc on its own will give you some tips on how to use it and running perldoc perldoc will give you possibly far more info than you need at the moment.

Parsing a CSV file and Hashing

I am trying to parse a CSV file to read in all the other zip codes. I am trying to create a hash where each key is a zip code and the value is the number of times it appears in the file. Then I want to print out the contents as Zip Code - Number. Here is the Perl script I have so far.
use strict;
use warnings;

my %hash = qw (
    zipcode count
);

my $file = $ARGV[0] or die "Need CSV file on command line \n";

open(my $data, '<', $file) or die "Could not open '$file' $!\n";

while (my $line = <$data>) {
    chomp $line;
    my @fields = split "," , $line;
    if (exists($hash{$fields[2]})) {
        $hash{$fields[1]}++;
    } else {
        $hash{$fields[1]} = 1;
    }
}

my $key;
my $value;
while (($key, $value) = each(%hash)) {
    print "$key - $value\n";
}

exit;
You don't say which column your zip code is in, but you are using the third field to check for an existing hash element, and then the second field to increment it.
There is no need to check whether a hash element already exists: Perl will happily create a non-existent hash element and increment it to 1 the first time you access it.
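A tiny illustration:
my %counts;
$counts{'90210'}++;   # no such key yet: the element is created and becomes 1
$counts{'90210'}++;   # now 2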
There is also no need to explicitly open any files passed as command line parameters: Perl will open them and read them if you use the <> operator without a file handle.
This reworking of your own program may work. It assumes the zip code is in the second column of the CSV. If it is anywhere else, just change ++$counts{$fields[1]} appropriately.
use strict;
use warnings;

@ARGV or die "Need CSV file on command line \n";

my %counts;

while (my $line = <>) {
    chomp $line;
    my @fields = split /,/, $line;
    ++$counts{$fields[1]};
}

while (my ($key, $value) = each %counts) {
    print "$key - $value\n";
}
Sorry if this is off-topic, but if you're on a system with the standard Unix text processing tools, you could use this command to count the number of occurrences of each value in field #2, and not need to write any code.
cut -d, -f2 filename.csv | sort | uniq -c
which will generate something like this output, where the count is listed first, and the zipcode second:
12 12345
2 56789
34 78912
1 90210

Perl LWP::Simple File.txt in Array not spaces

The file does not have spaces, and I need to keep every word in the corresponding array element. The content is in a variable; the real file is larger, but this sample is enough.
my $file = "http://www.ausa.com.ar/autopista/carteleria/plano/mime.txt";
&VPM4362=008000&VPM4381=FFFFFF&VPM4372=FFFFFF&VPM4391=008000&VPM4382=FFFF00&VPM4392=FF0000&VPM4182=FFFFFF&VPM4181=FFFF00&VPM4402=FFFFFF&VPM4401=FFFF00&VPM4412=008000&VPM4411=FF0000&VPM4422=FFFFFF&VPM4421=FFFFFF&VPM4322=FFFF00&CPMV001_1_Ico=112&CPMV001_1_1=AHORRE 15%&CPMV001_1_2=ADHIERASE AUPASS&CPMV001_1_3=AUPASS.COM.AR&CPMV002_1_Ico=0&CPMV002_1_1=ATENCION&CPMV002_1_2=RADARES&CPMV002_1_3=OPERANDO&CPMV003_1_Ico=0&CPMV003_1_1=ATENCION&CPMV003_1_2=RADARES&CPMV003_1_3=OPERANDO&CPMV004_1_Ico=255&CPMV004_1_1= &CPMV004_1_2=&CPMV004_1_3=&CPMV05 _1_Ico=0&CPMV05 _1_1=ATENCION&CPMV05 _1_2=RADARES&CPMV05 _1_3=OPERANDO&CPMV006_1_Ico=0&CPMV006_1_1=ATENCION&CPMV006_1_2=RADARES&CPMV006_1_3=OPERANDO&CPMV007_1_Ico=0&CPMV007_1_1=ATENCION&CPMV007_1_2=RADARES&CPMV007_1_3=OPERANDO&CPMV08 _1_Ico=0&CPMV08 _1_1=ATENCION&CPMV08
The code:
#!/bash/perl .T
use strict;
use warnings;
use LWP::Simple;

my $file = "http://www.ausa.com.ar/autopista/carteleria/plano/mime.txt";
my $mime = get($file);
my @new;
foreach my $line ($mime) {
    $line =~ s/&/ /g;
    push(@new, $line);
}
print "$new[0]\n";
I tried it this way, but when I run it the array has everything in one element (all together). The output I need is:
print "$new[1]\n";
VPM4381=FFFFFF
You don't want to substitute on &, you want to split on &.
@new = split /&/, $line;
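Going one step further (a sketch, not part of the original answer): since every piece has the form NAME=VALUE, you can load the whole string into a hash in one pass and look entries up by name:
my %params = map { split /=/, $_, 2 } split /&/, $mime;
print "$params{VPM4381}\n";   # prints FFFFFF for the sample data above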

How do I modify the second column of a CSV file based on the first column?

I'm new to Perl and I have a CSV file that contains e-mails and names, like this:
john@domain1.com;John
Paul@domain2.com;
Richard@domain3.com;Richard
Rob@domain4.com;
Andrew@domain5.com;Andrew
However, as you can see, a few entries/lines have the e-mail address and the ; field separator but lack the name. I need to read line by line, and if the name field is missing, print in its place the beginning of the e-mail address up to @domainX.com. Output example:
john@domain1.com;John
Paul@domain2.com;Paul
Richard@domain3.com;Richard
Rob@domain4.com;Rob
Andrew@domain5.com;Andrew
I'm new to Perl. I have the read-line-by-line iteration, like this:
#!/usr/bin/perl
use warnings;
use strict;
open (MYFILE, 'test.txt');
while (<MYFILE>) {
    chomp;
}
But I'm failing to parse the entries using ; as the separator, check whether the name field is missing, and in that case print the beginning of the e-mail address without the domain.
Can someone please give me a example based on my code?
First, if the file may contain real CSV (or space SV in your case) data (e.g. quoted fields), I'd strongly recommend using a standard Perl module to parse it.
Otherwise, a quick-and-dirty example can be:
#!/usr/bin/perl
use warnings;
use strict;

# In modern Perl, please always use the 3-arg form of open and lexical
# filehandles. More robust.
open my $fh, "<", 'test.txt' or die "Can not open: $!\n";

while (<$fh>) {
    chomp;
    my ($email, $name) = split(/;/, $_);
    if (!$name) {
        my ($userid, $domain) = split(/\@/, $email);
        $name = $userid;
    }
    print "$email;$name\n"; # Print to STDOUT for simplicity of example
}
close($fh);
Try:
#!/usr/bin/env perl
use strict;
use warnings;

for my $file ( @ARGV ){
    open my $in_fh, '<', $file or die "could not open $file: $!\n";
    while( my $line = <$in_fh> ){
        chomp( $line );
        my ( $email, $name ) = split m{ \; }msx, $line;
        if( ! ( defined $name && length( $name ) > 0 ) ){
            ( $name ) = split m{ \@ }msx, $email;
            $name = ucfirst( lc( $name ));
        }
        print "$email;$name\n";
    }
}
I am not a Perl programmer, but I would split first on the space character, and then you could iterate through the results and split each one on the semicolon. Then you can check the second member of the semicolon-split array, and if it is empty, replace it with the beginning of the first member of the semicolon-split array. Then just reverse the process, first joining by semicolons and then by spaces.

help merging perl code routines together for file processing

I need some Perl help in putting these two processes/pieces of code together. I was able to get them working individually for testing, but I need help bringing them together, especially with the loop constructs. I'm not sure if I should go with foreach; anyway, the code is below.
Also, any best practices would be great too as I'm learning this language. Thanks for your help.
Here's the process flow I am looking for:
read a directory
look for a particular file
use the file name to strip out some key information to create a newly processed file
process the input file
create the newly processed file for each input file read (if I read in 10, I create 10 new files)
Part 1:
my $target_dir = "/backups/test/";
opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
while (defined(my $file = readdir($dh))) {
    next if ($file =~ /^\.+$/);
    # Get filename attributes
    if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
        print "$1\n";
        print "$2\n";
        print "$3\n";
    }
    print "$file\n";
}
Part 2:
use strict;
use Digest::MD5 qw(md5_hex);

# Create new file
open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file";

my $data = '';
my $line1 = <>;
chomp $line1;
my @heading = split /,/, $line1;
my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D");

while (<>)
{
    my $digest = md5_hex($data);
    chomp;
    my (@values) = split /,/;
    my $extra = "__mykey__$sep1$digest$sep2" ;
    $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values));
    $data .= "$extra$eorec";
    print NEWFILE "$data";
}
#print $data;
close (NEWFILE);
You are using an old style of Perl programming. I recommend using functions and CPAN modules (http://search.cpan.org). Perl pseudocode:
use Modern::Perl;
# use...

sub get_input_files {
    # return an array of files (@)
}

sub extract_file_info {
    # takes the file name and returns an array of values (filename attrs)
}

sub process_file {
    # reads the input file, takes the previous attribs and builds the output file
}

my @ifiles = get_input_files;
foreach my $ifile (@ifiles) {
    my @attrs = extract_file_info($ifile);
    process_file($ifile, @attrs);
}
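For instance, extract_file_info() could be a thin wrapper around the filename regex from Part 1 (a sketch, untested):
sub extract_file_info {
    my ($file) = @_;
    # Reuses the pattern from Part 1; returns the three captured attributes,
    # or an empty list if the name doesn't match.
    return unless $file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/;
    return ($1, $2, $3);
}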
Hope it helps
I've bashed your two code fragments together (making the second a sub that the first calls for each matching file) and, if I understood your description of the objective correctly, this should do what you want. Comments on style and syntax are inline:
#!/usr/bin/env perl
# - Never forget these!
use strict;
use warnings;

use Digest::MD5 qw(md5_hex);

my $target_dir = "/backups/test/";
opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";

while (defined(my $file = readdir($dh))) {
    # Parens on postfix "if" are optional; I prefer to omit them
    next if $file =~ /^\.+$/;
    if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
        process_file($file, $1, $2, $3);
    }
    print "$file\n";
}

sub process_file {
    my ($orig_name, $foo_x, $name_x, $p_x) = @_;
    my $new_name = "/backups/processed/foo$foo_x.name.$name_x-foo_p$p_x.out";

    # - From your description of the task, it sounds like we actually want to
    #   read from the found file, not from <>, so opening it here to read
    # - Better to use lexical ("my") filehandle and three-arg form of open
    # - "or" has lower operator precedence than "||", so less chance of
    #   things being grouped in the wrong order (though either works here)
    # - Including $! in the error will tell why the file open failed
    open my $in_fh, '<', $orig_name or die "cannot read $orig_name: $!";
    open(my $out_fh, '>', $new_name) or die "cannot create $new_name: $!";

    my $data = '';
    my $line1 = <$in_fh>;
    chomp $line1;
    my @heading = split /,/, $line1;

    my ($sep1, $sep2, $eorec) = ("^A", "^E", "^D");

    while (<$in_fh>) {
        chomp;
        my $digest = md5_hex($data);
        my (@values) = split /,/;
        my $extra = "__mykey__$sep1$digest$sep2";
        $extra .= "$heading[$_]$sep1$values[$_]$sep2"
            for (0 .. $#values);   # $#values is the last index; scalar(@values) would run one element past the end
        # - Useless use of double quotes removed on next two lines
        $data .= $extra . $eorec;
        #print $out_fh $data;
    }

    # - Moved print to output file to here (where it will print the complete
    #   output all at once) rather than within the loop (where it will print
    #   all previous lines each time a new line is read in) to prevent
    #   duplicate output records. This could also be achieved by printing
    #   $extra inside the loop. Printing $data at the end will be slightly
    #   faster, but requires more memory; printing $extra within the loop and
    #   getting rid of $data entirely would require less memory, so that may
    #   be the better option if you find yourself needing to read huge input
    #   files.
    print $out_fh $data;

    # - $in_fh and $out_fh will be closed automatically when they go out of
    #   scope at the end of the block/sub, so there's no real point to
    #   explicitly closing them unless you're going to check whether the close
    #   succeeded or failed (which can happen in odd cases usually involving
    #   full or failing disks when writing; I'm not aware of any way that
    #   closing a file open for reading can fail, so that's just being left
    #   implicit)
    close $out_fh or die "Failed to close file: $!";
}
Disclaimer: perl -c reports that this code is syntactically valid, but it is otherwise untested.