Perl program skipping first line in csv file

I am trying to understand an error with a Perl program. I have a comma-separated file, and I want to extract the contents of each row to a separate text file, using the contents of the first field in each row as the file name.
The program below does exactly this EXCEPT it skips the first line of the CSV file. I tried to nail down the source of the error by adding a couple of print statements. The print statement on line 22 shows that the first line is read by the command on line 21. But once the foreach loop starts, the first line is not printed.
I'm not quite sure of the problem. I appreciate any help!
#!/usr/bin/perl
# script that takes a .csv file (such as that exported from Excel) and
# extracts the contents of each row into a separate text file, using the first column as the filename
# original source: http://www.tek-tips.com/viewthread.cfm?qid=1516940
# modified 3/14/12
# usage = ./export_rows.pl <yourfilename>.csv
use warnings;
use strict;
use Text::CSV_XS;
use Tie::Handle::CSV;
unless (@ARGV) {
    print "Please supply a .csv file at the command line! For example, export_rows.pl myfile.csv\n";
    exit;
}
my $fh = Tie::Handle::CSV->new(file => $ARGV[0],
                               header => 0);
my @headers = @{ scalar <$fh> };
print "$headers[0]\n\n";
foreach my $csv_line (<$fh>) {
    print "$csv_line->[0]\n";
    open OUT, "> $csv_line->[0].txt" or die "Could not open file $csv_line->[0].txt for output.\n$!";
    for my $i (1..$#headers) {
        print OUT "$csv_line->[$i]\n";
    }
    close OUT;
}
close $fh;

Try beginning at 0 in your for loop:
for my $i (1..$#headers)
Should be:
for my $i (0..$#headers)
EDIT:
To get the first line of the file you can use Tie::File
Here is sample code:
my @arr;
tie @arr, 'Tie::File', 'a.txt' or die $!;
my $first = $arr[0];
untie @arr;
print "$first\n";
This module is cool in that it allows you to access the lines of a file as the elements of an array. It is not terribly efficient for very large files, but I think you can definitely use it here.
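Alternatively, since your own code has already read the first data row into @headers, you could simply write that row out before the foreach loop so it is not lost. This is only a minimal sketch built on the variables already in your program (untested against Tie::Handle::CSV):
# @headers holds the fields of the already-consumed first row,
# exactly as read by: my @headers = @{ scalar <$fh> };
open my $first, '>', "$headers[0].txt"
    or die "Could not open file $headers[0].txt for output.\n$!";
print {$first} "$headers[$_]\n" for 1 .. $#headers;
close $first;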

Related

How to add a new line inside a cell instead of the next cell using the Excel module in Perl

I am learning the Excel module in Perl. I am reading a file which has 4 lines. When I use the Excel module, each line is printed to the next cell. Is there any way to print them all in the same cell?
I have also tried adding a "^#" symbol at the end of each line; it is then added in the same cell, but with the extra character.
Thanks in advance
file 1:
line 1
line 2
line 3
line 4
use strict;
use warnings;
use Excel::Writer::XLSX;
my $WorkBook=Excel::Writer::XLSX->new('test.xls');
my $Worksheet=$WorkBook->add_worksheet();
my %data;
open (INPUT,"file");
my $line_no=1;
while (<INPUT>) {
    $data{$line_no} = $_;
    $line_no = $line_no + 1;
}
foreach my $i (keys %data) {
    $Worksheet->write("A$i", $data{$i});
}
All the syntax is taken care of; only a snippet of the code is pasted here.
If you want the whole file in one cell, you don't need to read it line by line. You can read the file in one go, then insert it into your worksheet. I added the format as suggested by the documentation. It might not be necessary (I don't have Excel to test it; LibreOffice renders it fine without the format).
use strict;
use warnings;
use Excel::Writer::XLSX;
my $WorkBook=Excel::Writer::XLSX->new('test.xls');
my $Worksheet=$WorkBook->add_worksheet();
open (my $in,'<',"file.txt")or die "could not open file.txt: $!";
my $file_content = do { local $/; <$in> };
my $format = $WorkBook->add_format();
$format->set_text_wrap();
$Worksheet-> write("A1",$file_content,$format);
In case you want to add only lines selected by line numbers, you can read the lines into an array. The <> operator returns a list of lines when used in list context. I prepended an element to @lines here, so your line numbers start with 1. Then use an array slice and join it:
use strict;
use warnings;
use Excel::Writer::XLSX;
my @line_numbers = (4,5);
my $WorkBook=Excel::Writer::XLSX->new('test.xls');
my $Worksheet=$WorkBook->add_worksheet();
open (my $in, '<', "file.txt") or die "could not open file.txt: $!";
my @lines = ('', <$in>);
my $content = join ('', @lines[@line_numbers]);
my $format = $WorkBook->add_format();
$format->set_text_wrap();
$Worksheet-> write("A1",$content,$format);

Read data from each column and finally print it in a row in Perl

I'm writing a script to read a large file (>10 GB) and write the data from an array to the end of each line in that file. Here is my code:
my $count = 0;
while (my $lines = <$FILE>) {
    seek $FILE, length($lines), 1;
    print $FILE "\t", $array[$count];
    $count++;
}
But I think I'm wrong in finding the end of line using the seek method. I can't get my head around this. Can anyone please see what's wrong in this code? Before processing:
my 1st line
my 2nd line
my 3rd line
After processing:
my 1st line data1
my 2nd line data2
my 3rd line data3
data1, data2, data3 are in @array.
Details on the code:
FILE is opened in +< mode (read/write)
FILE lines are tab delimited.
@array holds data1, data2, ...
Issues:
Moving the pointer to the end of each line
Thanks,
Robin
You can't do that. Seeking to a location in a file and then printing to it overwrites the data at that position.
I suggest you use Tie::File, which lets you access the contents of a file as an array, so appending to the end of a line of the file is done by simply adding a string to one of the elements of the array.
The code would look like this. Note that the line that creates @newdata is there just for testing. It creates an array that is the same length as the file, with lines like data1, data2 etc. as you have in your question.
You should test this on a smaller file initially as it will take a while to process the 15GB file, and it also overwrites it, so if you have any bugs you will destroy your data.
use strict;
use warnings;
use Tie::File;
use Fcntl 'O_RDWR';
tie my @file, 'Tie::File', 'myfile', mode => O_RDWR or die $!;
my @newdata = map sprintf('data%d', $_ + 1), 0 .. $#file;
my $i = 0;
for my $line (@file) {
    $line .= "\t" . $newdata[$i];
    ++$i;
}
untie @file;

I want to replace a sequence name in a fasta file with another name

I have one fasta file and one text file. The fasta file contains sequences in fasta format, and the text file contains the names of genes. Now I want to replace the names of the sequences in the fasta file, after the '>' sign, with the gene names from the text file.
I am new to Perl. I have written a script, but I don't know why it's not working. Can anyone help me with it, please?
Following is my script:
print"Enter annotated file...";
$f1=<STDIN>;
print"Enter sequence file...";
$f2=<STDIN>;
open(FILE1,$f1) || die"Can't open $f1";
#annotfile=<FILE1>;
open(FILE2,$f2) || die"Can't open $f2";
#seqfile=<FILE2>;
#d=split('\t',#annotfile[0]);
for($i=0;$i<scalar(#annotfile);$i++)
{
#curr_all=split('\t',#annotfile[$i]);
#curr_id[$i]=#curr_all[0];
#gene_nm[$i]=#curr_all[1];
}
for($j=0;$j<scalar(#seqfile);$j++)
{
$id=#curr_id[$j];
$gene=#gene_nm[$j];
#seqfile[$j]=~s/$id[$j]/$gene[$j]/g;
print #seqfile[$j];
}
My files look like the following:
annot.txt
pool75_contig_389 ubiquitin ligase e3a
pool75_contig_704 tumor susceptibility
pool75_contig_1977 serine threonine-protein phosphatase 4 catalytic subunit
pool75_contig_3064 bardet-biedl syndrome 2 protein P
pool75_contig_2499 succinyl- ligase
goat300.fasta
>pool75_contig_704
CCCTTTCTCCCTTCCCAACATTCAGAGATACTGAATCGAAACTCTTACTGTCTGTTAGAT
GACAAAGAGTTATCCATCCTACATACTCCAATTTCCTTCCGCAACTTGTGATTTCGCCGC
TTGAATCTTGACGCCGTGCGTCCACAGTTTGTTGTGTTTTATCAATCAAGGTCATTATCA
ACCGAAGACGCTATCTATTTTCTTGGCGAAGCTCTCGGAAAGGAGCCATCGAAATGGAAG
TATTTCTCAAGAAAGTCCGCGAGTTATCCCGGAAGCAGTTC
>pool75_contig_389
GACCTATACCGGACCGTCACTGAAAGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
ACGATCCAGGCATGGAGTTGTGGTGACGAGTAGGAGGGTCACCGTGGTGAGCGGGAAGCC
TCGGGCGTGAGCCTGGGTGGAGCCGCCACGGGTGCAGATCTTGGTGGTAGTAGCAAATAT
TCAAGTGAGAACCTTGAAGGCCGAGGTGGAGAAGGNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNTCATTTGTAT
CGCCCGGAAAACGTCACAAGAACGGGAGTTGCGTACAGAA
>pool75_contig_1977
AAGGGACACCGTTGGGTGAGGCGAGCTGCGTTCCTCGAACCATGGCTTCAAAAAGCGACT
TAGACCGTCAGATTGAACAGCTCAGGGCCTGCAAGCTCATTACAGAGGATGAGGTTAAGG
CACTCTGCGCTAAGGCGCGTGAGATTTTAATTGAAGAGAGTAATGTCCAGTGCGTGGACT
CACCTGTCACGGTTTGTGGCGATATCCACGGCCAGTTTTACGACTTGATTGAACTGTTTA
AAGTGGGCGGAGATGTTC
>pool75_contig_3064
TTACTATTTCTGGGCCTTAAGACTGGCTTAGTCGCTTACGACCCTTATAACAATGTAGAT
GTATATTATAAGGATCTTCCTGATGGTGCTAACGCTATGTTAATTTATTCAAACTCACCG
ACAAAGGAACAGAATATGCTTTGGCAGGTGGAAACTGTTCGATAATTGGATTGAACGACG
GCGGATGCGAGGTATTTTGGACAGTCACTGGCGACTCCGTTTGCTCTCTTTGCTCGATTA
AATCCGACAGCGATAAGTCAAGAGATTTTGTGGTTGGCTCTGAAGATTTTGACATCCGAA
TCTTCCATGGGGATGCCATAATATATGAAATCACGGAGTCTGATG
>pool75_contig_2499
AAGAGAAGAGGTGAGTTTGAGTATTGTTTGTGTGTGTGTGGTTGGGTGAGTGTGTGGTAT
GTGGTGTATGTGTGTGATGAATGTATGTGAAAGAGAGTGATGAATCTCATGGATATGTTC
GAGTTCGTGGTTTCCATTGATCGGTTATAGCCGAGATGATGGATGTGTTCCATGTGTCTG
ATTTCAGTTTAGGATTGTGTTGATGATGTTGATGATGAAAATTGTTGATGGTGATGACGA
TAGTGATGATGATGACGATGTTTCGGATAATGGTGATGATGATGATGGTTCCGACGATGA
TGTTTCGCTTGATGATGGTGATAATGATGACTCCGAAAATAACGTTGACTCGGATGAG
Consider using Bio::SeqIO to parse your Fasta dataset, instead of doing it yourself. Bio::SeqIO lives for this task, and is well developed for it. Additionally, if you're in bioinformatics, it would serve you well to get to know Bio::SeqIO. Given this, consider the following:
use strict;
use warnings;
use Bio::SeqIO;
open my $fh, '<', 'annot.txt' or die $!;
my %annot = map { /(\S+)\s+(.+)/; $1 => $2 } <$fh>;
close $fh;
my $in = Bio::SeqIO->new( -file => 'goat300.fasta', -format => 'Fasta' );
while ( my $seq = $in->next_seq() ) {
    my $seqID = $annot{ $seq->id } // $seq->id;
    print "$seqID\n" . $seq->seq . "\n";
}
Output on your datasets:
tumor susceptibility
CCCTTTCTCCCTTCCCAACATTCAGAGATACTGAATCGAAACTCTTACTGTCTGTTAGATGACAAAGAGTTATCCATCCTACATACTCCAATTTCCTTCCGCAACTTGTGATTTCGCCGCTTGAATCTTGACGCCGTGCGTCCACAGTTTGTTGTGTTTTATCAATCAAGGTCATTATCAACCGAAGACGCTATCTATTTTCTTGGCGAAGCTCTCGGAAAGGAGCCATCGAAATGGAAGTATTTCTCAAGAAAGTCCGCGAGTTATCCCGGAAGCAGTTC
ubiquitin ligase e3a
GACCTATACCGGACCGTCACTGAAAGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNACGATCCAGGCATGGAGTTGTGGTGACGAGTAGGAGGGTCACCGTGGTGAGCGGGAAGCCTCGGGCGTGAGCCTGGGTGGAGCCGCCACGGGTGCAGATCTTGGTGGTAGTAGCAAATATTCAAGTGAGAACCTTGAAGGCCGAGGTGGAGAAGGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNTCATTTGTATCGCCCGGAAAACGTCACAAGAACGGGAGTTGCGTACAGAA
serine threonine-protein phosphatase 4 catalytic subunit
AAGGGACACCGTTGGGTGAGGCGAGCTGCGTTCCTCGAACCATGGCTTCAAAAAGCGACTTAGACCGTCAGATTGAACAGCTCAGGGCCTGCAAGCTCATTACAGAGGATGAGGTTAAGGCACTCTGCGCTAAGGCGCGTGAGATTTTAATTGAAGAGAGTAATGTCCAGTGCGTGGACTCACCTGTCACGGTTTGTGGCGATATCCACGGCCAGTTTTACGACTTGATTGAACTGTTTAAAGTGGGCGGAGATGTTC
bardet-biedl syndrome 2 protein P
TTACTATTTCTGGGCCTTAAGACTGGCTTAGTCGCTTACGACCCTTATAACAATGTAGATGTATATTATAAGGATCTTCCTGATGGTGCTAACGCTATGTTAATTTATTCAAACTCACCGACAAAGGAACAGAATATGCTTTGGCAGGTGGAAACTGTTCGATAATTGGATTGAACGACGGCGGATGCGAGGTATTTTGGACAGTCACTGGCGACTCCGTTTGCTCTCTTTGCTCGATTAAATCCGACAGCGATAAGTCAAGAGATTTTGTGGTTGGCTCTGAAGATTTTGACATCCGAATCTTCCATGGGGATGCCATAATATATGAAATCACGGAGTCTGATG
succinyl- ligase
AAGAGAAGAGGTGAGTTTGAGTATTGTTTGTGTGTGTGTGGTTGGGTGAGTGTGTGGTATGTGGTGTATGTGTGTGATGAATGTATGTGAAAGAGAGTGATGAATCTCATGGATATGTTCGAGTTCGTGGTTTCCATTGATCGGTTATAGCCGAGATGATGGATGTGTTCCATGTGTCTGATTTCAGTTTAGGATTGTGTTGATGATGTTGATGATGAAAATTGTTGATGGTGATGACGATAGTGATGATGATGACGATGTTTCGGATAATGGTGATGATGATGATGGTTCCGACGATGATGTTTCGCTTGATGATGGTGATAATGATGACTCCGAAAATAACGTTGACTCGGATGAG
The hash %annot is initialized by reading and capturing the contents of your annot.txt data. A Bio::SeqIO object is created using your goat300.fasta file data. The while loop iterates through your fasta sequences. The variable $seqID either takes the associated value of the key in the %annot hash or it keeps the current sequence ID (the // notation means defined-or, which ensures $seqID will be defined). Finally, the Fasta record is printed.
Hope this helps!
There were a lot of warnings in your code, and your approach was inefficient. Let me first show you a working Perl program. I'll explain afterwards.
#!/usr/bin/perl
use strict;
use warnings;
# Read the annotations file
print"Enter annotated file...\n";
# my $f1 = <STDIN>;
my $f1 = 'annot.txt';
open(my $fh_annotations, '<', $f1) or die "Can't open $f1";
my @annotfile = <$fh_annotations>;
close $fh_annotations;
# Read the sequence file
print"Enter sequence file...\n";
# my $f2 = <STDIN>;
my $f2 = 'goat300.fasta';
open(my $fh_genes, '<', $f2) or die "Can't open $f2";
my @seqfile = <$fh_genes>;
close $fh_genes;
# Process the annotations data
my %names; # this hash is going to hold the names
foreach my $line (@annotfile) {
    chomp $line;                       # remove newline
    my @fields = split /\t/, $line;    # split into array
    $names{$fields[0]} = $fields[1];   # save in the hash as a key->value pair
}
# Process the sequence data
foreach my $line (@seqfile) {
    # Look at each line
    if ($line =~ m/>(.+)$/) {
        # If there is a heading there, remember it...
        if (exists $names{$1}) {
            # ... check if we know a name for it and replace it in the line
            $line =~ s/($1)/$names{$1}/;
        }
    }
    # output the line (this would be done to another filehandle)
    print $line;
}
This reads both files and saves them in memory, just like yours did. But instead of trying to build two arrays for the names, I went with a hash, which stores key/value pairs. Think of it like an array with names instead of numbers and no particular sorting.
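For instance, a toy version of the %names hash built from the first two lines of your annot.txt would look like this (illustration only, not part of the program):
my %names = (
    'pool75_contig_389' => 'ubiquitin ligase e3a',
    'pool75_contig_704' => 'tumor susceptibility',
);
print $names{'pool75_contig_704'}, "\n";   # prints "tumor susceptibility"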
Once these names are set up, I can process the sequence file. I simply look at each line and check if there is a heading there, by looking for the > sign. If it's there (it goes into $1 because of the parentheses), I check whether we have a hash entry (with exists) in our %names hash. If we do, we can replace the heading with the proper name.
After that, we could write it out to a new file. I'm just printing it.
I've used a few other techniques. Unfortunately the literature people get in a BioPerl context is quite outdated. Please take this advice; it will make your life easier.
Always use strict and warnings. They will tell you about problems with your code.
Always declare your variables with my. This is not like other languages, where you need to set up a variable at the top of your program. You can declare it where you need it. The vars only live in a certain scope, which means between the nearest enclosing { and } brackets, or block.
Use three-argument open and lexical file handles for security. Read more here.
Perl offers foreach as an alternative to the C for loop. In this case, it made things a lot easier.
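As a tiny illustration (made-up data, not from your program), both loops below visit the same elements, but foreach needs no index bookkeeping:
my @lines = ('line 1', 'line 2', 'line 3');
for (my $i = 0; $i < scalar(@lines); $i++) {   # C-style loop with an explicit counter
    print "$lines[$i]\n";
}
foreach my $line (@lines) {                    # foreach: Perl hands you each element
    print "$line\n";
}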
One more thing about this program: While this example data was rather short, I believe your actual data might be a lot larger. Consider processing the sequence file while you read it so you do not run out of memory. There's no need to save all the lines, unless you want to do something else with them.
open my $fh_out, '>', $filename_out or die $!;
open my $fh_in, '<', $filename_in or die $!;
while (my $line = <$fh_in>) {
    # do stuff with the line, like your regex
    print $fh_out $line;
}
close $fh_in;
close $fh_out;

Perl prints only the last line of the array

I am trying to print the array, but the output contains only the last line of the array. The partial code is as follows.
open OUT, "> /myFile.txt"
or die "Couldn't open output file: $!";
foreach (#result) {
print OUT;
}
The output is
List Z
which is the last line, but when I do print "@result" the output is
List A
List B
List C and so on...
I am a little bit confused about why the results are different for the same array.
Working on a hunch, I tried adding \r to the end of your input lines, and sure enough, it creates the illusion that only the last line of your input is printed to the file. Here's the code to test it:
use strict;
use warnings;
my @result = map "$_\r", 'A' .. 'Z';
open (OUT, "> myFile.txt") or die("Couldn't open output file: $!");
foreach (@result) {
    print OUT;
}
What you have probably done is perform chomp on lines from a file created on a different operating system (DOS, Windows); chomp does not strip the \r line endings. Hence, when the lines are printed, they overwrite each other.
If this is what is wrong, the solution is to use the dos2unix tool to fix your files, or to use:
s/\s+\z//;
to strip your newlines.
You may inspect your input by using the Data::Dumper module, using the option Useqq, e.g.:
use Data::Dumper;
$Data::Dumper::Useqq = 1;
print Dumper \@result;
If these whitespace characters are present in your data, they will then be visible in the output.
The problem is here:
open OUT, "> /myFile.txt"
This should be:
open OUT, ">>", "/myFile.txt"
What you wrote overwrites the entire file for each iteration of the foreach (@result) loop.
What you are intending to do is append to it (">>").
">>" appends, ">" overwrites.
Also take note of how I broke ">> /myfile.txt" into ">>", "/myfile.txt".
This is both more secure and more robust for less specific applications of open.
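A minimal illustration of the difference between the two modes (hypothetical file names, not from the question):
open my $overwrite, '>',  'fresh.txt' or die "open: $!";   # '>' truncates the file on open
open my $append,    '>>', 'grows.txt' or die "open: $!";   # '>>' keeps existing contents
print {$overwrite} "replaces whatever was there\n";
print {$append}    "added after the existing lines\n";
close $overwrite;
close $append;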
Foreign line terminators from any platform can easily be fixed by clearing whitespace from the end of each line and adding a clean newline back when printing it.
Like this
open my $out, '>', '/myFile.txt' or die "Couldn't open output file: $!";
foreach (@result) {
    s/\s+$//;
    print $out "$_\n";
}
or
foreach my $line (@result) {
    $line =~ s/\s+$//;
    print $out "$line\n";
}

How do I copy a CSV file, but skip the first line?

I want to write a script that takes a CSV file, deletes its first row and creates a new output csv file.
This is my code:
use Text::CSV_XS;
use strict;
use warnings;
my $csv = Text::CSV_XS->new({sep_char => ','});
my $file = $ARGV[0];
open(my $data, '<', $file) or die "Could not open '$file'\n";
my $csvout = Text::CSV_XS->new({binary => 1, eol => $/});
open my $OUTPUT, '>', "file.csv" or die "Can't able to open file.csv\n";
my $tmp = 0;
while (my $line = <$data>) {
    # if ($tmp == 0)
    # {
    #     $tmp = 1;
    #     next;
    # }
    chomp $line;
    if ($csv->parse($line)) {
        my @fields = $csv->fields();
        $csvout->print($OUTPUT, \@fields);
    } else {
        warn "Line could not be parsed: $line\n";
    }
}
At the command line I type c:\test.pl csv.csv and it doesn't create the file.csv output, but when I double-click the script it creates a blank CSV file. What am I doing wrong?
Your program isn't ideally written, but I can't tell why it doesn't work if you pass the CSV file on the command line as you have described. Do you get the errors Could not open 'csv.csv' or Can't able to open file.csv? If not then the file must be created in your current directory. Perhaps you are looking in the wrong place?
If all you need to do is to drop the first line then there is no need to use a module to process the CSV data - you can handle it as a simple text file.
If the file is specified on the command line, as in c:\test.pl csv.csv, you can read from it without explicitly opening it using the <> operator.
This program reads the lines from the input file and prints them to the output only if the line counter (the $. variable) isn't equal to one.
use strict;
use warnings;
open my $out, '>', 'file.csv' or die $!;
while (my $line = <>) {
    print $out $line unless $. == 1;
}
You don't need any modules for this task, since CSV (comma-separated value) files are simply text files: just open the file and iterate over its lines, writing to the output all lines except a particular one (e.g. the first). Such a task (skipping the first line) is so simple that it would probably be better done with a command-line one-liner than a dedicated script.
A quick search turns up numerous tutorials about Perl input/output operations; see e.g. this link for an example:
http://learn.perl.org/examples/read_write_file.html
PS. Perl scripts (programs) are usually not "compiled" into a binary file; they are of course "compiled", but on the fly, which is why /usr/bin/perl is called an "interpreter" rather than a "compiler" like gcc or g++. I guess what you're looking for is an editor with syntax highlighting and other development goodies; you could probably try Eclipse with the Perl plugin for that (cross-platform):
http://www.eclipse.org/downloads/
http://www.epic-ide.org/download.php/
This:
user@localhost:~$ cat blabla.csv | perl -ne 'print $_ if $x++;'
skips the first line (it prints a line only if the variable, which is incremented after each use, is greater than zero).
You are missing your first (and only) argument due to Windows.
I think this question will help you: @ARGV is empty using ActivePerl in Windows 7
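As a quick workaround (generic advice, not taken from the linked question), invoke the interpreter explicitly so the argument is not dropped by the .pl file association:
C:\> perl c:\test.pl csv.csv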