Print different value each time key is called - perl

I need some help with Perl.
I have a list of IDs & corresponding values in a file. Each ID acts as a key in a hash of hashes so there are multiple values for each key. I'm trying to open a second file & assign a different value each time the key is encountered. Here is what I have so far:
This code takes the input file & builds the hash of hashes. $prot is the key & $dir is the value. Each key has multiple values.
open (IN, "file_name");
while (<IN>)
{
($prot, $dir) = split;
push (@{$dir{$prot}}, $dir );
}
In the second part of the code, I would like to read each line of the file & assign a different value using the first column in the line as the key. Each key will appear multiple times in the second file & for each instance I would like it to print a different value.
open (FH, "results_file");
while (<FH>)
{
chomp;
@a=split;
$prot=$a[1];
foreach (values %dir)
{print "$a[1]"."\t"."#{$dir{$prot}}"."\n";}
}
Right now the way the code is written it prints all the values for each key when it encounters the key.
Thanks so much for any help that can be offered!
Edit:
The first input file is something along the lines of
BC_123456 dir_6789
BC_456789 dir_3456
BC_234689 dir_1298
BC_123456 dir_3987
BC_876432 dir_7642

Each ID acts as a key in a hash of hashes
You actually have a hash of arrays there.
Assuming you want to print the first value for the first instance, second for the second instance, and so on, you can just shift off the values for each key you encounter:
open (FH, "results_file");
while (<FH>)
{
chomp;
@a=split;
$prot=$a[1];
foreach (values %dir) {
my $val = shift @{$dir{$prot}};
print "$a[1]\t$val\n";
}
}
This will remove one value from the HoA entry, assuming you don't need to use that array afterwards.
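If you do need the arrays again afterwards, a non-destructive variant (just a sketch, using an extra %seen counter hash that is not in the original code) is to keep a per-key counter and index into the array instead of shifting:
my %seen;                                   # counts how many times each key has been seen so far
open (FH, "results_file");
while (<FH>)
{
chomp;
my @a = split;
my $prot = $a[1];
my $val = $dir{$prot}[ $seen{$prot}++ ];    # take the next unused value; the array stays intact
print "$prot\t$val\n";
}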

I think this gets your code working. There are some best practices to note:
You should always run under strict & warnings.
use strict;
use warnings;
I recommend the three argument version of open(), with an error check (or die "message $!";), and a lexical variable to store the filehandle.
See:
perldoc -f open
perldoc perlopentut
Close your files when done using them.
Variables should be introduced (declared) with my unless you have a good reason to use something else.
I also made a couple of changes that I recommend but that aren't strictly necessary.
Removed the variable @a because you did not actually need it.
Cleaned up your print because it was hard to read. You could also try printf.
You have two variables named dir (%dir and $dir), which I find confusing in this case, so I renamed %dir to %dirs.
CODE:
use strict;
use warnings;
my %dirs;
# Part 1 - Input
my $filename_input = "file_name.txt";
open(my $IN,'<',$filename_input) or die "Unable to open [$filename_input] for reading - $!";
while(<$IN>) {
my ($prot, $dir) = split;
push @{$dirs{$prot}}, $dir;
}
close $IN;
# Part 2 - Output
my $filename_results = "results_file.txt";
open(my $RESULTS,'<',$filename_results) or die "Unable to open [$filename_results] for reading - $!";
while(<$RESULTS>) {
chomp;
my $prot = (split)[1];
foreach (values %dirs) {
print "$prot\t#{$dirs{$prot}}\n"; # Or try: printf "%s\t%s\n",$prot,"#{$dirs{$prot}}";
}
}
close $RESULTS;
file_name.txt
BC_123456 dir_6789
BC_456789 dir_3456
BC_234689 dir_1298
BC_123456 dir_3987
BC_876432 dir_7642
results_file.txt
don'tcare BC_123456
don'tcare BC_234689

Related

Creating multiple hashes from multiple files in one go

I want to perform a vlookup-like process, but with multiple files, wherein the contents of the first column from all files (sorted and uniq-ed) is the reference value. Now I would like to store the key-value pairs from each file in its own hash and then print them together. Something like this:
file1: while(){$hash1{$key}=$val}...file2: while(){$hash2{$key}=$val}...file3: while(){$hash3{$key}=$val}...so on
Then print it: print "$ref_val $hash1{$ref_val} $hash3{$ref_val} $hash3{$ref_val}..."
$i=1;
@FILES = @ARGV;
foreach $file(@FILES)
{
open($fh,$file);
$hname="hash".$i; ##trying to create unique hash by attaching a running number to hash name
while(<$fh>){@d=split("\t");$hname{$d[0]}=$d[7];}$i++;
}
$set=$i-1; ##store this number for recreating the hash names during printing
open(FH,"ref_list.txt");
while(<FH>)
{
chomp();print "$_\t";
## here i run the loop recreating the hash names and printing its corresponding value
for($i=1;$i<=$set;$i++){$hname="hash".$i; print "$hname{$_}\t";}
print "\n";
}
Now this is where I am stuck: Perl takes $hname as the hash name instead of $hash1, $hash2...
Thanks in advance for the help and opinions.
The shown code attempts to use symbolic references to construct variable names at runtime. Those things can raise a lot of trouble and should not be used, except very occasionally in very specialized code.
Here is a way to read multiple files, each into a hash, and store them for later processing.
use warnings;
use strict;
use feature 'say';
use Data::Dump qw(dd);
my @files = @ARGV;
my @data;
for my $file (@files) {
open my $fh, '<', $file or do {
warn "Skip $file, can't open it: $!";
next;
};
push @data, { map { (split /\t/, $_)[0,6] } <$fh> };
}
dd \@data;
Each hash associates the first column with the seventh (index 6), as clarified, for each line. A reference to such a hash for each file, formed by { }, is added to the array.
Note that when you add a key-value pair to a hash which already has that key, the new value overwrites the old one. So if a string repeats in the first column in a file, the hash for that file will end up with the value (column 7) for the last occurrence. The OP doesn't discuss possible duplicates of this kind in the data files (only for the reference file); please clarify if needed.
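If duplicates in the first column do need to be kept, one possible variation (a sketch, not part of the original answer) is to build each file's hash with an array reference per key, replacing the single push line inside the file loop, so that every column-7 value survives:
my %h;
while (my $line = <$fh>) {
    chomp $line;
    my ($key, $val) = (split /\t/, $line)[0, 6];
    push @{ $h{$key} }, $val;    # accumulate every value seen for this key
}
push @data, \%h;                 # store a reference to this file's hash, as before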
Data::Dump is used only to print; if you don't wish to install it, use the core Data::Dumper instead.
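For instance, with the core module the dump line could simply be (a minimal equivalent):
use Data::Dumper;
print Dumper(\@data);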
I am not sure that I get the use of that "reference file", but you can now go through the array of hash references for each file and fetch values as needed. Perhaps like
open my $fh_ref, '<', $ref_file or die "Can't open $ref_file: $!";
while (my $line = <$fh_ref>) {
my $key = ... # retrieve the key from $line
print "$key: ";
foreach my $hr (@data) {
print "$hr->{$key} ";
}
say '';
}
This will print key: followed by values for that string, one from each file.
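For illustration only, assuming the key is the first whitespace-separated field of each reference-file line (the question doesn't show that file's format, so this is an assumption), the elided part could be filled in like this:
open my $fh_ref, '<', $ref_file or die "Can't open $ref_file: $!";
while (my $line = <$fh_ref>) {
    chomp $line;
    my ($key) = split ' ', $line;      # assumption: the key is the first field
    print "$key: ";
    foreach my $hr (@data) {
        print defined $hr->{$key} ? "$hr->{$key} " : "- ";   # '-' where a file has no entry for this key
    }
    print "\n";
}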

I want to replace a sequence name in fasta file with another name

I have one fasta file and one text file. The fasta file contains sequences in fasta format, and the text file contains the names of genes. Now I want to replace the names of the sequences in the fasta file (after the '>' sign) with the gene names from the text file.
I am new to Perl. I have written a script, but I don't know why it's not working. Can anyone help me with that, please?
following is my script:
print"Enter annotated file...";
$f1=<STDIN>;
print"Enter sequence file...";
$f2=<STDIN>;
open(FILE1,$f1) || die"Can't open $f1";
@annotfile=<FILE1>;
open(FILE2,$f2) || die"Can't open $f2";
@seqfile=<FILE2>;
@d=split('\t',@annotfile[0]);
for($i=0;$i<scalar(@annotfile);$i++)
{
@curr_all=split('\t',@annotfile[$i]);
@curr_id[$i]=@curr_all[0];
@gene_nm[$i]=@curr_all[1];
}
for($j=0;$j<scalar(@seqfile);$j++)
{
$id=@curr_id[$j];
$gene=@gene_nm[$j];
@seqfile[$j]=~s/$id[$j]/$gene[$j]/g;
print @seqfile[$j];
}
my files looks like following:
annot.txt
pool75_contig_389 ubiquitin ligase e3a
pool75_contig_704 tumor susceptibility
pool75_contig_1977 serine threonine-protein phosphatase 4 catalytic subunit
pool75_contig_3064 bardet-biedl syndrome 2 protein P
pool75_contig_2499 succinyl- ligase
goat300.fasta
>pool75_contig_704
CCCTTTCTCCCTTCCCAACATTCAGAGATACTGAATCGAAACTCTTACTGTCTGTTAGAT
GACAAAGAGTTATCCATCCTACATACTCCAATTTCCTTCCGCAACTTGTGATTTCGCCGC
TTGAATCTTGACGCCGTGCGTCCACAGTTTGTTGTGTTTTATCAATCAAGGTCATTATCA
ACCGAAGACGCTATCTATTTTCTTGGCGAAGCTCTCGGAAAGGAGCCATCGAAATGGAAG
TATTTCTCAAGAAAGTCCGCGAGTTATCCCGGAAGCAGTTC
>pool75_contig_389
GACCTATACCGGACCGTCACTGAAAGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
ACGATCCAGGCATGGAGTTGTGGTGACGAGTAGGAGGGTCACCGTGGTGAGCGGGAAGCC
TCGGGCGTGAGCCTGGGTGGAGCCGCCACGGGTGCAGATCTTGGTGGTAGTAGCAAATAT
TCAAGTGAGAACCTTGAAGGCCGAGGTGGAGAAGGNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNTCATTTGTAT
CGCCCGGAAAACGTCACAAGAACGGGAGTTGCGTACAGAA
>pool75_contig_1977
AAGGGACACCGTTGGGTGAGGCGAGCTGCGTTCCTCGAACCATGGCTTCAAAAAGCGACT
TAGACCGTCAGATTGAACAGCTCAGGGCCTGCAAGCTCATTACAGAGGATGAGGTTAAGG
CACTCTGCGCTAAGGCGCGTGAGATTTTAATTGAAGAGAGTAATGTCCAGTGCGTGGACT
CACCTGTCACGGTTTGTGGCGATATCCACGGCCAGTTTTACGACTTGATTGAACTGTTTA
AAGTGGGCGGAGATGTTC
>pool75_contig_3064
TTACTATTTCTGGGCCTTAAGACTGGCTTAGTCGCTTACGACCCTTATAACAATGTAGAT
GTATATTATAAGGATCTTCCTGATGGTGCTAACGCTATGTTAATTTATTCAAACTCACCG
ACAAAGGAACAGAATATGCTTTGGCAGGTGGAAACTGTTCGATAATTGGATTGAACGACG
GCGGATGCGAGGTATTTTGGACAGTCACTGGCGACTCCGTTTGCTCTCTTTGCTCGATTA
AATCCGACAGCGATAAGTCAAGAGATTTTGTGGTTGGCTCTGAAGATTTTGACATCCGAA
TCTTCCATGGGGATGCCATAATATATGAAATCACGGAGTCTGATG
>pool75_contig_2499
AAGAGAAGAGGTGAGTTTGAGTATTGTTTGTGTGTGTGTGGTTGGGTGAGTGTGTGGTAT
GTGGTGTATGTGTGTGATGAATGTATGTGAAAGAGAGTGATGAATCTCATGGATATGTTC
GAGTTCGTGGTTTCCATTGATCGGTTATAGCCGAGATGATGGATGTGTTCCATGTGTCTG
ATTTCAGTTTAGGATTGTGTTGATGATGTTGATGATGAAAATTGTTGATGGTGATGACGA
TAGTGATGATGATGACGATGTTTCGGATAATGGTGATGATGATGATGGTTCCGACGATGA
TGTTTCGCTTGATGATGGTGATAATGATGACTCCGAAAATAACGTTGACTCGGATGAG
Consider using Bio::SeqIO to parse your Fasta dataset, instead of doing it yourself. Bio::SeqIO lives for this task, and is well developed for it. Additionally, if you're in bioinformatics, it would serve you well to get to know Bio::SeqIO. Given this, consider the following:
use strict;
use warnings;
use Bio::SeqIO;
open my $fh, '<', 'annot.txt' or die $!;
my %annot = map { /(\S+)\s+(.+)/; $1 => $2 } <$fh>;
close $fh;
my $in = Bio::SeqIO->new( -file => 'goat300.fasta', -format => 'Fasta' );
while ( my $seq = $in->next_seq() ) {
my $seqID = $annot{ $seq->id } // $seq->id;
print "$seqID\n" . $seq->seq . "\n";
}
Output on your datasets:
tumor susceptibility
CCCTTTCTCCCTTCCCAACATTCAGAGATACTGAATCGAAACTCTTACTGTCTGTTAGATGACAAAGAGTTATCCATCCTACATACTCCAATTTCCTTCCGCAACTTGTGATTTCGCCGCTTGAATCTTGACGCCGTGCGTCCACAGTTTGTTGTGTTTTATCAATCAAGGTCATTATCAACCGAAGACGCTATCTATTTTCTTGGCGAAGCTCTCGGAAAGGAGCCATCGAAATGGAAGTATTTCTCAAGAAAGTCCGCGAGTTATCCCGGAAGCAGTTC
ubiquitin ligase e3a
GACCTATACCGGACCGTCACTGAAAGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNACGATCCAGGCATGGAGTTGTGGTGACGAGTAGGAGGGTCACCGTGGTGAGCGGGAAGCCTCGGGCGTGAGCCTGGGTGGAGCCGCCACGGGTGCAGATCTTGGTGGTAGTAGCAAATATTCAAGTGAGAACCTTGAAGGCCGAGGTGGAGAAGGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNTCATTTGTATCGCCCGGAAAACGTCACAAGAACGGGAGTTGCGTACAGAA
serine threonine-protein phosphatase 4 catalytic subunit
AAGGGACACCGTTGGGTGAGGCGAGCTGCGTTCCTCGAACCATGGCTTCAAAAAGCGACTTAGACCGTCAGATTGAACAGCTCAGGGCCTGCAAGCTCATTACAGAGGATGAGGTTAAGGCACTCTGCGCTAAGGCGCGTGAGATTTTAATTGAAGAGAGTAATGTCCAGTGCGTGGACTCACCTGTCACGGTTTGTGGCGATATCCACGGCCAGTTTTACGACTTGATTGAACTGTTTAAAGTGGGCGGAGATGTTC
bardet-biedl syndrome 2 protein P
TTACTATTTCTGGGCCTTAAGACTGGCTTAGTCGCTTACGACCCTTATAACAATGTAGATGTATATTATAAGGATCTTCCTGATGGTGCTAACGCTATGTTAATTTATTCAAACTCACCGACAAAGGAACAGAATATGCTTTGGCAGGTGGAAACTGTTCGATAATTGGATTGAACGACGGCGGATGCGAGGTATTTTGGACAGTCACTGGCGACTCCGTTTGCTCTCTTTGCTCGATTAAATCCGACAGCGATAAGTCAAGAGATTTTGTGGTTGGCTCTGAAGATTTTGACATCCGAATCTTCCATGGGGATGCCATAATATATGAAATCACGGAGTCTGATG
succinyl- ligase
AAGAGAAGAGGTGAGTTTGAGTATTGTTTGTGTGTGTGTGGTTGGGTGAGTGTGTGGTATGTGGTGTATGTGTGTGATGAATGTATGTGAAAGAGAGTGATGAATCTCATGGATATGTTCGAGTTCGTGGTTTCCATTGATCGGTTATAGCCGAGATGATGGATGTGTTCCATGTGTCTGATTTCAGTTTAGGATTGTGTTGATGATGTTGATGATGAAAATTGTTGATGGTGATGACGATAGTGATGATGATGACGATGTTTCGGATAATGGTGATGATGATGATGGTTCCGACGATGATGTTTCGCTTGATGATGGTGATAATGATGACTCCGAAAATAACGTTGACTCGGATGAG
The hash %annot is initialized by reading and capturing the contents of your annot.txt data. A Bio::SeqIO object is created using your goat300.fasta file data. The while loop iterates through your fasta sequences. The variable $seqID either takes the associated value of the key in the %annot hash or it keeps the current sequence ID (the // notation means defined-or, which ensures $seqID will be defined). Finally, the Fasta record is printed.
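A quick illustration of the defined-or fallback, with made-up values:
my %annot = ( 'pool75_contig_704' => 'tumor susceptibility' );
my $id    = 'pool75_contig_9999';        # hypothetical ID that has no annotation
my $seqID = $annot{$id} // $id;          # stays 'pool75_contig_9999' because the lookup is undef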
Hope this helps!
There were a lot of warnings in your code, and your approach was inefficient. Let me first show you a working Perl program. I'll explain afterwards.
#!/usr/bin/perl
use strict;
use warnings;
# Read the annotations file
print"Enter annotated file...\n";
# my $f1 = <STDIN>;
my $f1 = 'annot.txt';
open(my $fh_annotations, '<', $f1) or die "Can't open $f1";
my @annotfile = <$fh_annotations>;
close $fh_annotations;
# Read the sequence file
print"Enter sequence file...\n";
# my $f2 = <STDIN>;
my $f2 = 'goat300.fasta';
open(my $fh_genes, '<', $f2) or die "Can't open $f2";
my @seqfile = <$fh_genes>;
close $fh_genes;
# Process the annotations data
my %names; # this hash is going to hold the names
foreach my $line (@annotfile) {
chomp $line; # remove newline
my @fields = split /\t/, $line; # split into array
$names{$fields[0]} = $fields[1]; # save in the hash as key->value pair
}
# Process the sequence data
foreach my $line (@seqfile) {
# Look at each line
if ($line =~ m/>(.+)$/) {
# If there is a heading there, remember it...
if (exists $names{$1}) {
# ... check if we know a name for it and replace it in the line
$line =~ s/($1)/$names{$1}/;
}
}
# output the line (this would be done to another filehandle)
print $line;
}
This reads both files and saves them in memory, just like yours did. But instead of trying to build two arrays for the names, I went with a hash, which is a key/value pair. Think of it like an array with names instead of numbers and no particular sorting.
Once these names are set up, I can process the sequence file. I simply look at each line and check if there is a heading there, by looking for the > sign. If it's there (it goes into $1 because of the parenthesis), I look if we have a hash entry (with exists) in our %names hash. If we do, we can replace the heading with the proper name.
After that, we could write it out to a new file. I'm just printing it.
I've used a few other techniques. Unfortunately the literature people get in a BioPerl context is quite outdated. Please take this advice, it will make your life easier.
Always use strict and warnings. They will tell you about problems with your code.
Always declare your variables with my. This is not like other languages, where you need to set up a variable at the top of your program. You can declare it where you need it. The vars only live in a certain scope, which means between the nearest enclosing { and } brackets, or block.
Use three-argument open and lexical file handles for security. Read more here.
Perl offers foreach as an alternative to the C for loop. In this case, it made things a lot easier.
One more thing about this program: While this example data was rather short, I believe your actual data might be a lot larger. Consider processing the sequence file while you read it so you do not run out of memory. There's no need to save all the lines, unless you want to do something else with them.
open my $fh_out, '>', $filename_out or die $!;
open my $fh_in, '<', $filename_in or die $!;
while (my $line = <$fh_in>) {
# do stuff with the line, like your regex
print $fh_out $line;
}
close $fh_in;
close $fh_out;

Perl - Move Pointer to Start of Line

I have 2 files.
Obfuscated file called input.txt
A second file called mapping.txt consisting of key value pairs.
I want to find every occurrence of the key from mapping.txt in input.txt and replace it with the value corresponding to the key.
Please note that I want to overwrite the contents of the line in input.txt everytime a successful match occurs.
I have written the following code:
#! /usr/bin/perl
use strict;
use warnings;
(my $mapping,my $input)=@ARGV;
open(MAPPING,'<',$mapping) || die("couldn't read from the file, $mapping with error: $!\n");
while(<MAPPING>)
{
chomp $_;
my $line=$_;
(my $key,my $value)=split("=",$line);
open(INPUT,'+<',$input);
while(<INPUT>)
{
chomp $_;
if(index($_,$key)!=-1)
{
$_=~s/\Q$key/$value/g;
# move pointer to beginning of line
print INPUT $_."\n";
}
}
close INPUT;
}
close MAPPING;
Brief Overview of the code:
Opens the mapping.txt file in read mode.
Since each line is a key value pair, it splits it into key and value.
Opens the input.txt file in overwrite mode.
Checks if the key is found in the current line.
If the key is found, then substitute the key with the value ignoring any meta characters in the key (by prefixing \Q)
At this point, the file pointer would be at the end of the line since the previous statement would scan the entire line to find the key and replace it.
If I could move the file pointer to the start of the line, then I can overwrite with:
print INPUT $_,"\n"
I tried looking up the seek function however unable to figure out a way to use it for this purpose.
Once this is done, then the code will close the file. It will pick the next key value pair from mapping.txt and again scan the input file from beginning looking for matches and replacing them.
The most important point is that, each time, the inner while loop will be operating on the input.txt that was modified in the previous iteration of the inner while loop. This way, any successful find-and-replace operations keep getting saved in the input.txt file.
How do I do this?
Thanks.
First of all you should use lexical file handles, the three-parameter form of open, and always check the status to make sure that an open has succeeded (as you do with the mapping file but not the input file).
The solution you suggest, of rewinding to the start of the line before using print will not work because you cannot update part of a file unless your replacement data is exactly the same size as the data it is replacing. This will not generally be true in your situation.
There are a number of solutions to this, the first and simplest is to invert the loops and put the read loop for the mapping file inside the read loop for the input file. Your code would look like this:
use strict;
use warnings;
my ($mapping, $input) = @ARGV;
open my $infh, '<', $input or die "Unable to open '$input': $!";
while (my $line = <$infh>) {
open my $mapfh, '<', $mapping or die "Unable to open '$mapping': $!";
while (<$mapfh>) {
chomp;
my ($key, $value) = split /=/;
$line =~ s/\Q$key/$value/g;
}
print $line;
}
but your output is sent to STDOUT and you will have to arrange the output to be saved to a file and renamed appropriately.
An alternative here is to use the -i command-line option, which forces a file to be renamed automatically, and a backup saved if required. Using a bare -i will modify the file in-place by deleting the old file and renaming the new output, while giving the parameter a value like -i.bak will rename the old file by appending .bak instead of deleting it. The -i option applies only to files read from ARGV using an empty <> operator, and setting the built-in variable $^I to a value (or to the empty string '') has the same effect. The code looks like this:
use strict;
use warnings;
my $mapping = shift @ARGV;
$^I = '.bak';
while (my $line = <>) {
open my $mapfh, '<', $mapping or die "Unable to open '$mapping': $!";
while (<$mapfh>) {
chomp;
my ($key, $value) = split /=/;
$line =~ s/\Q$key/$value/g;
}
print $line;
}
A third, and neater alternative is to use Tie::File, which maps a Perl array to the file contents and reflects all modifications of the array back to the original file. This is an example:
use strict;
use warnings;
use Tie::File;
my ($mapping, $input) = @ARGV;
tie my @input, 'Tie::File', $input or die "Unable to open '$input': $!";
for my $line (@input) {
open my $mapfh, '<', $mapping or die "Unable to open '$mapping': $!";
while (<$mapfh>) {
chomp;
my ($key, $value) = split /=/;
$line =~ s/\Q$key/$value/g;
}
}
Finally, it is highly inefficient to keep opening and reading the mapping file for every line of input, and it is best to build a regex from its contents and use it throughout the program. This version first builds a hash %mapping from the mapping file and then creates a regex by applying quotemeta to each hash key to escape any regex metacharacters, and then joining them with the regex alternation operator |. The keys are sorted by descending length so that the longest matches are found and replaced in priority over the shorter ones.
use strict;
use warnings;
use Tie::File;
my ($mapping, $input) = @ARGV;
open my $mapfh, '<', $mapping or die "Unable to open '$mapping': $!";
my %mapping = map { chomp; /\S/ ? split /=/ : () } <$mapfh>;
my $regex = join '|', map quotemeta, sort { length $b <=> length $a } keys %mapping;
tie my @input, 'Tie::File', $input or die "Unable to open '$input': $!";
for my $line (@input) {
$line =~ s/($regex)/$mapping{$1}/g;
}
If I could move the file pointer to the start of the line, then I can overwrite with:
print INPUT $_,"\n"
Your premise is wrong: Assuming the byte sequence 00 01 02 and the rule 01 = A1 A2, the resulting byte sequence would be 00 A1 A2 and not 00 A1 A2 02. Ways around this include:
Use the Tie::File module.
Write to another file, and rename the second file to the original once your pass is complete; see the sketch below. This is probably the most efficient and scalable approach.
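A rough sketch of that second option (the temporary file name and the placeholder comment are illustrative, not part of the original code):
use strict;
use warnings;

my ($input, $tmp) = ('input.txt', 'input.txt.new');   # illustrative names

open my $in,  '<', $input or die "Can't read '$input': $!";
open my $out, '>', $tmp   or die "Can't write '$tmp': $!";

while (my $line = <$in>) {
    # ... apply the substitutions to $line here ...
    print $out $line;
}

close $in;
close $out or die "Can't finish writing '$tmp': $!";
rename $tmp, $input or die "Can't rename '$tmp' over '$input': $!";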
seeking is not a good idea: You would be constrained to fixed-length substitutions, and seek and tell operate on bytes, not characters. If you really have to use in-place editing, you could use this loop:
my $beginning_of_line = tell $fh;
while (<$fh>) {
# do processing
seek $fh, $beginning_of_line, 0;
# do update
} continue {$beginning_of_line = tell $fh}
Also, you make several passes over the input file. Assuming the token sequence a b c and the rules b = d e and d = f, you would produce the sequences a f e c or a d e c depending on the order of the rules! This may not be what you want.
Also, consider the ambiguity between the rules a = c and a b = d over the input a b. Does this produce c b or d?

Using Perl to parse a CSV file from a particular row to the end of the file

I am very new to Perl and need your help.
I have a CSV file xyz.csv with contents:
Here the level1 and er values are string names, not numbers.
level1,er
level2,er2
level3,er3
level4,er4
I parse this CSV file using the script below and pass the fields to an array in the first run
open(my $d, '<', $file) or die "Could not open '$file' $!\n";
while (my $line = <$d>) {
chomp $line;
my #data = split "," , $line;
@XYX = ( [ "$data[0]", "$data[1]" ], );
}
For the second run I take an input from a command prompt and store in variable $val. My program should parse the CSV file from the value stored in variable until it reaches the end of the file
For example
I input level2 so I need a script to parse from the second line to the end of the CSV file, ignoring the values before level2 in the file, and pass these values (level2 to level4) to the @XYX = (["$data[1]","$data[1]"],);}
level2,er2
level3,er3
level4,er4
I input level3 so I need a script to parse from the third line to the end of the CSV file, ignoring the values before level3 in the file, and pass these values (level3 and level4) to the @XYX = (["$data[0]","$data[1]"],);}
level3,er3
level4,er4
How do I achieve that? Please do give your valuable suggestions. I appreciate your help
As long as you are certain that there are never any commas in the data you should be OK using split. But even so it would be wise to limit the split to two fields, so that you get everything up to the first comma and everything after it
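For example, with a two-field limit everything after the first comma stays in the second field (a minimal illustration):
my ($level, $error) = split /,/, 'level2,er2,with,extra,commas', 2;
# $level is 'level2', $error is 'er2,with,extra,commas'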
There are a few issues with your code. First of all I hope you are putting use strict and use warnings at the top of all your Perl programs. That simple measure will catch many trivial problems that you could otherwise overlook, and so it is especially important before you ask for help with your code
It isn't commonly known, but putting a newline "\n" at the end of your die string prevents Perl from giving file and line number details in the output of where the error occurred. While this may be what you want, it is usually more helpful to be given the extra information
Your variable names are very unhelpful, and by convention Perl variables consist of lower-case alphanumerics and underscores. Names like @XYX and $W don't help me understand your code at all!
Rather than splitting to an array, it looks like you would be better off putting the two fields into two scalar variables to avoid all that indexing. And I am not sure what you intend by @XYX = (["$data[1]","$data[1]"],). First of all, do you really mean to use $data[1] twice? Secondly, you should never put scalar variables inside double quotes, as it does something very specific, and unless you know what that is you should avoid it. Finally, did you mean to push an anonymous array onto @XYX each time around the loop? Otherwise the contents of the array will be overwritten each time a line is read from the file, and the earlier data will be lost
This program uses a regular expression to extract $level_num from the first field. All it does is find the first sequence of digits in the string, which can then be compared to the minimum required level $min_level to decide whether a line from the log is relevant
use strict;
use warnings;
my $file = 'xyz.csv';
my $min_level = 3;
my @list;
open my $fh, '<', $file or die "Could not open '$file' $!";
while (my $line = <$fh>) {
chomp $line;
my ($level, $error) = split ',', $line, 2;
my ($level_num) = $level =~ /(\d+)/;
next unless $level_num >= $min_level;
push @list, [ $level, $error ];
}
For deciding which records to process you can use the "flip-flop" operator (..) along these lines.
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;
my $level = shift || 'level1';
while (<DATA>) {
if (/^\Q$level,/ .. 0) {
print;
}
}
__DATA__
level1,er
level2,er2
level3,er3
level4,er4
The flip-flop operator returns false until its first operand is true. At that point it returns true until its second operand is true; at which point it returns false again.
I'm assuming that your file is ordered so that once you start to process it, you never want to stop. That means that the first operand to the flip-flop can be /^\Q$level,/ (match the string $level at the start of the line) and the second operand can just be zero (as we never want it to stop processing).
I'd also strongly recommend not parsing CSV records using split /,/. That may work on your current data but, in general, the fields in a CSV file are allowed to contain embedded commas which will break this approach. Instead, have a look at Text::CSV or Text::ParseWords (which is included with the standard Perl distribution).
#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;
my @XYZ;
my $file = 'xyz.csv';
open my $fh, '<', $file or die "$file: $!\n";
my $level = shift; # get level from commandline
my $getall = not defined $level; # true if level not given on commandline
my $parser = Text::CSV->new({ binary => 1 }); # object for parsing lines of CSV
while (my $row = $parser->getline($fh)) # $row is an array reference containing cells from a line of CSV
{
if ($getall # if level was not given on commandline, then put all rows into #XYZ
or # if level *was* given on commandline, then...
$row->[0] eq $level .. 0 # ...wait until the first cell in a row equals $level, then put that row and all subsequent rows into #XYZ
)
{
push @XYZ, $row;
}
}
close $fh;
#!/usr/bin/perl
use strict;
use warnings;
open(my $data, '<', $file) or die "Could not open '$file' $!\n";
my $level = shift ||"level1";
while (my $line = <$data>) {
chomp $line;
my @fields = split "," , $line;
if($fields[0] eq $level .. 0){
print "\n$fields[0]\n";
print "$fields[1]\n";
}}
This worked....thanks ALL for your help...

In Perl, how can I read the contents of a file into an array? [duplicate]

Possible Duplicate:
Easiest way to open a text file and read it into an array with Perl
I'm new to Perl and want, for each file, to push the contents of that file into its own separate array. I managed to do so with the following, which uses if statements. But I want something like $1 for my arrays. Is that possible?
#!/usr/bin/perl
use strict;
my @karray;
my @sarray;
my @testarr = (@sarray,@karray);
my $stemplate = "foo.txt";
my $ktemplate = "bar.txt";
sub pushf2a {
open(IN, "<$_[0]") || die;
while (<IN>) {
if ($_[0] eq $stemplate) {
push (@sarray,$_);
} else {
push (@karray,$_);
}
}
close(IN) || die $!;
}
&pushf2a($stemplate,@sarray);
&pushf2a($ktemplate,@karray);
print sort @sarray;
print sort @karray;
I want something like this:
#!/bin/sh
myfoo=(@s,@k)
barf() {
pushtoarray $1
}
barf @s
barf @k
If you are going to slurp a file, use File::Slurp:
use File::Slurp;
my @lines = read_file 'filename';
Firstly, you can't call an array $1 in Perl, as that (and all the other scalars with a number as their name) are used by the regex engine and so can get overwritten whenever a regex match is run.
Secondly, you can read a file into an array much more easily than that: just use the diamond operator in list context.
open my $file, '<', $filename or die $!;
my @array = <$file>;
close $file;
You then get an array of the lines of the file, as split by the current line separator which is by default what you might expect it to be i.e. your platform's newline.
Thirdly, your pushf2a sub is rather strange, especially passing in an array and then not using it. You could write a subroutine which takes a filename and returns an array, and thus avoid your issue with the internal if statements:
sub f2a {
open my $file, '<', $_[0] or die $!;
<$file>;
# $file closes here as it goes out of scope
}
my @sarray = f2a($stemplate);
my @karray = f2a($ktemplate);
Overall I'm unsure exactly what the best solution is as I can't quite make out exactly what you want to do, but maybe this will help you out.
I don't understand what you want with "$1 for arrays", but good practice is this code:
It keeps the files and their contents in a HoA - a hash of arrays.
my $main_file = qq(container.txt); #contains all names of your files.
my $fh; #filehandler of main file
open $fh, "<", $main_file or die "something wrong with your main file! check it!\n";
my %hash; # this hash for containing all files
while(<$fh>){
chomp; # strip the newline so the filename in $_ can be opened
my $tmp_fh; # will use it for files in main file
#$_ contains the next name of file you want to push into array
open $tmp_fh, "<", $_ or next; #next? maybe die, don't bother about it
$hash{$_}=[<$tmp_fh>];
#close $tmp_fh; #it will close automatically
}
close $fh;