replace NULL token in Hive CLI output - perl

I am trying to deliver the output of the Hive CLI to my (closed source) application, and want to replace all "NULL" tokens with an empty string. This is because Hive returns NULL even for numeric fields, which the application raises exceptions for. I thought this would be a simple sed or perl regex, but I can't solve the problem so far.
Here's an example of the data record -
NULL<TAB>NULL<TAB>NULL<TAB>NULL<TAB>NULL<TAB>NULL<TAB>NULL<TAB>NULL<TAB>2015-02-08
The Perl code I tried is:
my %replace = (
    "\tNULL\t" => "b",
    "^NULL\t"  => "a",
    "\tNULL\$" => "c"
);
my $regex = join "|", keys %replace;
#$regex = qr/$regex/;
my $filename = 'hout';
open(my $fh, '<:encoding(UTF-8)', $filename)
    or die "Could not open file '$filename' $!";
while (my $row = <$fh>) {
    chomp $row;
    $row =~ s/($regex)/$replace{$1}/g;
    print "$row\n";
}
This is the output I get -
NULLbNULLbNULLbNULL<TAB>2015-02-08
In other words, in a stream of 'fields' delimited by a 'character', I want to replace any field that is equal to the string "NULL" with an empty string, so the delimiters surrounding the field (or start of line + delimiter, or delimiter + end of line) become adjacent.
Any guidance would be much appreciated! Thanks!
P.S. I don't need a Perl solution per se; any terse solution would be awesome (I tried sed as well, with similar results).

The root of your problem here is that your patterns overlap. You have a delimiter on either side of your 'NULL', which the substitution consumes before moving on.
So something like this:
my $string = "NULL\tNULL\tNULL\tsome_value\tNULL\n";
print $string;
$string =~ s/(\A|\t)NULL(\t|\Z)/$1$2/g;
print $string;
$string =~ s/(\A|\t)NULL(\t|\Z)/$1$2/g;
print $string;
You need two passes to process it, because the pattern 'grabs' too much for the next iteration to match.
So with reference to: Matching two overlapping patterns with Perl
What you probably need is:
$string =~ s/(\A|\t)NULL(?=(\t|\Z))/$1/g;
You can use the same model if you want to apply it to separate patterns.
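For illustration, here is a minimal, self-contained sketch of that lookahead substitution applied to a record shaped like the one in the question (tabs written explicitly; not tested against real Hive output):
#!/usr/bin/perl
use strict;
use warnings;
my $row = "NULL\tNULL\tNULL\tsome_value\tNULL\t2015-02-08";
# Empty out every field that is exactly "NULL"; the lookahead leaves the
# trailing delimiter in place for the next match, so one pass is enough.
$row =~ s/(\A|\t)NULL(?=\t|\Z)/$1/g;
print "$row\n";    # prints "\t\t\tsome_value\t\t2015-02-08" (tabs preserved)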

I do this a lot creating output files from the Hive CLI; it may be "simple", but I just pipe my output through this sed string:
hive -f test.hql | sed 's/NULL//g' > test.out
Works fine for me.
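One caveat, though: s/NULL//g also removes NULL when it occurs inside a longer value (say, a hypothetical field NULLIFIED), because it is not anchored to field boundaries. If that can happen in your data, a field-aware one-liner in the spirit of the lookahead answer above may be safer (untested sketch):
hive -f test.hql | perl -pe 's/(^|\t)NULL(?=\t|$)/$1/g' > test.out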

Related

Perl with FASTA sequence extraction has problems (only) with first sequence

I am using a function/subroutine, extract_seq, available on the internet to extract sequences from FASTA files. Briefly:
A sequence begins with a first line identified by '>', followed by an ID and other information separated by spaces
Subsequent lines (not beginning with '>') hold the sequence strings themselves
A FASTA file can have 1 or more sequences
The bug is that the output has an additional '>' character for the first sequence (only), causing consistency problems.
The program works fine in extracting sequences based on ID, except for the additional '>' in the case of the first sequence. Could you please suggest a solution as well as the reason for the bug? A simple regex would fix the problem, but I do not feel good about fixing bugs that I cannot understand.
The Perl script is:
#!/usr/bin/perl -w
use strict;
my $seq_all = "seq_all.fa"; # all proteins in fasta format
foreach my $q_seq ("A0A1D8PC43","A0A1D8PC38") {
    print "Querying $q_seq\n";
    &extract_seq($seq_all, $q_seq);
}
exit 0;
sub extract_seq
{
    open(my $fh, ">query.seq");
    my $seq_all = $_[0];
    my $lookup = $_[1];
    local $/ = "\n>";
    @ARGV = ($seq_all);
    while (my $seq = <>) {
        chomp $seq;
        my ($id) = $seq =~ /^>*(\S+)/;
        if ($id eq $lookup) {
            print "$seq\n";
            last;
        }
    }
}
The FASTA file is:
>A0A1D8PC43 A0A1D8PC43_CANAL Diphosphomevalonate decarboxylase
MYSASVTAPVNIATLKYWGKRDKSLNLPTNSSISVTLSQDDLRTLTTASASESFEKDQLW
LNGKLESLDTPRTQACLADLRKLRASIEQSPDTPKLSQMKLHIVSENNFPTAAGLASSAA
GFAALVSAIAKLYELPQDMSELSKIARKGSGSACRSLFGGFVAWEMGTLPDGQDSKAVEI
APLEHWPSLRAVILVVSDDKKDTPSTTGMQSTVATSDLFAHRIAEVVPQRFEAMKKAILD
KDFPKFAELTMKDSNSFHAVCLDSYPPIFYLNDTSKKIIKMVETINQQEVVAAYTFDAGP
NAVIYYDEANQDKVLSLLYKHFGHVPGWKTHYTAETPVAGVSRIIQTSIGPGPQETSESL
TK
>A0A1D8PC56 A0A1D8PC56_CANAL Uncharacterized protein OS=Candida
MSDTKKTTETDSEVGYLDIYLRFNDDMEKDYCFQVKTTTVFKDLYKVFRTLPISLRPSVF
YHAQPIGFKKSVSPGYLTQDGNFIFDEDSQKQAVPVNDNDLINETVWPGQLILPVWQFND
FGFYSFLAFLACWLYTDLPDFISPTPGICLTNQMTKLMAWVLVQFGKDRFAETLLADLYD
TVGVGAQCVFFGFHIIKCLFIFGFLYTGVFNPMRVFRLTPRSVKLDVTKEELVKLGWTGT
RKATIDEYKEYYREFKINQHGGMIQAHRAGLFNTLRNLGVQLESGEGYNTPLTEENKLRT
MRQIVEDAKKPDFKLKLSYEYFAELGYVFATNAENKEGSELAQLIKQYRRYGLLVSDQRI
KTVVRARKGETDEEKPKVEEVVEE
>A0A1D8PC67 A0A1D8PC67_CANAL Bfa1p OS=Candida albicans (strain
MVSDKLTLLRQFSEEDELFGDIEGIDYHDGETLKINKFSFPSSASSPSFAITGQSPNMRS
INGKRITRETLSEYSEENETDLTSEFSDQEFEWDGFNKNQSIYQQMNQRLIATKVAKQRE
AEREQRELMQKRHKDYDPNQTLRLKDFNKLTNENLTLLDQLDDEKTVNYEYVRDDVEDFA
QGFDKDFETKLRIQPSMPTLRSNAPTLKKYKSYGEFKCDNRVKQKLDRIPSFYNKNQLLS
KFKETKSYHPHHKKMGTVRCLNNNSEVPVTYPSISNMKLNKEKNRWEGNDIDLIRFEKPS
LITHKENKTKKRQGNMVYDEQNLRWINIESEHDVFDDIPDLAVKQLQSPVRGLSQFTQRT
TSTTATATAPSKNNETQHSDFEISRKLVDKFQKEQAKIEKKINHWFIDTTSEFNTDHYWE
IRKMIIEE
>A0A1D8PC38 A0A1D8PC38_CANAL Cta2p OS=Candida albicans (strain
MPENLQTRLHNSLDEILKSSGYIFEVIDQNRKQSNVITSPNNELIQKSITQSLNGEIQNF
HAILDQTVSKLNDAEWCLGVMVEKKKKHDELKVKEEAARKKREEEAKKKEEEAKKKAEEA
KKKEEEAKKAEEAKKAEEAKKVEEAAKKAEEAKKAEEEARKKAETAPQKFDNFDDFIGFD
INDNTNDEDMLSNMDYEDLKLDDKVPATTDNNLDMNNILENDESILDGLNMTLLDNGDHV
NEEFDVDSFLNQFGN
Edit:
The problem I face, as explained above, is that the output has an additional '>' character for the first sequence (only). I do not see the reason for this, and it is causing a lot of trouble. The output is:
Querying A0A1D8PC43
>A0A1D8PC43 A0A1D8PC43_CANAL Diphosphomevalonate decarboxylase
MYSASVTAPVNIATLKYWGKRDKSLNLPTNSSISVTLSQDDLRTLTTASASESFEKDQLW
LNGKLESLDTPRTQACLADLRKLRASIEQSPDTPKLSQMKLHIVSENNFPTAAGLASSAA
GFAALVSAIAKLYELPQDMSELSKIARKGSGSACRSLFGGFVAWEMGTLPDGQDSKAVEI
APLEHWPSLRAVILVVSDDKKDTPSTTGMQSTVATSDLFAHRIAEVVPQRFEAMKKAILD
KDFPKFAELTMKDSNSFHAVCLDSYPPIFYLNDTSKKIIKMVETINQQEVVAAYTFDAGP
NAVIYYDEANQDKVLSLLYKHFGHVPGWKTHYTAETPVAGVSRIIQTSIGPGPQETSESL
TK
Querying A0A1D8PC38
A0A1D8PC38 A0A1D8PC38_CANAL Cta2p OS=Candida albicans (strain
MPENLQTRLHNSLDEILKSSGYIFEVIDQNRKQSNVITSPNNELIQKSITQSLNGEIQNF
HAILDQTVSKLNDAEWCLGVMVEKKKKHDELKVKEEAARKKREEEAKKKEEEAKKKAEEA
KKKEEEAKKAEEAKKAEEAKKVEEAAKKAEEAKKAEEEARKKAETAPQKFDNFDDFIGFD
INDNTNDEDMLSNMDYEDLKLDDKVPATTDNNLDMNNILENDESILDGLNMTLLDNGDHV
NEEFDVDSFLNQFGN
$/ is the input record separator. Setting local $/ = "\n>"; means the input is split into records ending with "\n>", and chomp removes that terminator. Every record except the first therefore no longer begins with '>' (its '>' was consumed as part of the previous record's terminator), while the first record still carries its leading '>'. The regex /^>*(\S+)/ strips it from $id, but the '>' remains in $seq, which is what gets printed.
According to the FASTA article on Wikipedia, a line beginning with '>' is a description line and may not always be just an ID. However, if in your data it always is, the following may fix it:
my ($id,$seq) = $seq =~ /^>*(.*)\n(\S+)/;
You set the record separator to "\n>". That separator does not occur before the first sequence, so its leading '>' is never stripped.
Fixed code sequence:
...
chomp $seq;
# for first sequence
$seq =~ s/^>//;
my ($id) = $seq =~ /^(\S+)/;
if ($id eq $lookup) {
...
Please note that your implementation is extremely inefficient, because it reads & parses the file contents for each query. How about splitting loading/parsing and querying into separate functions?
Alternative solution: give the full list of lookup values to the loader. It would then fill an answer array as it encounters the matches during reading the file.
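A rough sketch of the first suggestion (parse once, then query), with load_seqs as a made-up name and untested code:
sub load_seqs {
    my ($file) = @_;
    my %seq_by_id;
    open(my $fh, '<', $file) or die "Could not open '$file': $!";
    local $/ = "\n>";
    while (my $rec = <$fh>) {
        chomp $rec;
        $rec =~ s/^>//;       # only the first record keeps its leading '>'
        $rec =~ s/\s+\z//;    # the last record keeps a trailing newline
        my ($id) = $rec =~ /^(\S+)/;
        $seq_by_id{$id} = $rec;
    }
    close $fh;
    return \%seq_by_id;
}
my $seqs = load_seqs($seq_all);
for my $q_seq ("A0A1D8PC43", "A0A1D8PC38") {
    print "Querying $q_seq\n";
    print "$seqs->{$q_seq}\n" if exists $seqs->{$q_seq};
}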

Save a row to csv format

I have a set of rows from a DB that I would like to save to a csv file.
Taking into account that the data are ASCII characters without anything weird in them, would the following suffice?
my $csv_row = join( ', ', @$row );
# save csv_row to file
My concern is whether that would create rows that are acceptable as CSV by any tool, without me having to worry about quoting etc.
Update:
Is there any difference with this?
my $csv = Text::CSV->new ( { binary => 1, eol => "\n"} );
my $header = join (',', qw( COL_NAME1 COL_NAME2 COL_NAME3 COL_NAME4 ) );
$csv->print( $fh, [$header] );
foreach my $row ( @data ) {
$csv->print($fh, $row );
}
This gives me as a first line:
" COL_NAME1,COL_NAME2,COL_NAME3,COL_NAME4"
Please notice the double quotes; the rest of the rows are without any quotes.
What is the difference from my plain join? Also, do I need binary set?
The safest way should be to write clean records with a comma separator. The simpler the better, especially with a format that has so much variation in real life. If needed, double-quote each field.
The true strength of using the module is in reading "real-life" data. But it makes perfect sense to use it for writing as well, for a uniform approach to CSV. Also, options can then be set in a clear way, and the module can iron out some glitches in the data.
The Text::CSV documentation tells us about the binary option:
Important Note: The default behavior is to accept only ASCII characters in the range from 0x20 (space) to 0x7E (tilde). This means that the fields can not contain newlines. If your data contains newlines embedded in fields, or characters above 0x7E (tilde), or binary data, you must set binary => 1 in the call to new. To cover the widest range of parsing options, you will always want to set binary.
I'd say use it. Since you are writing a file, this may be all you need for options, along with eol (or use the say method). But do scan the many useful options and review their defaults.
As for your header, the print method expects an array reference where each field is an element, not a single string with comma-separated fields. So it is wrong to say
my $header = join (',', qw(COL_NAME1 COL_NAME2 COL_NAME3 COL_NAME4)); # WRONG
$csv->print( $fh, [$header] );
since the $header is a single string, which is then made the sole element of the (anonymous) array reference created by [ ... ]. So it prints this string as the first field in the row, and since it detects the separator , in it, it also double-quotes the field. Instead, you should have
$csv->print($fh, [qw(COL_NAME1 COL_NAME2 COL_NAME3 COL_NAME4)]);
or better assign column names to @header and then do $csv->print($fh, \@header).
This is also an example of why it is good to use the module for writing – if a comma slips into an element of the array, supposed to be a single field, it is handled correctly by double-quoting.
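As a small illustration of that (made-up values, assuming a Text::CSV object $csv and an open output filehandle $fh as in the snippets above):
my @row = ('widget', 'sold in US, EU', 42);
$csv->print($fh, \@row);    # writes: widget,"sold in US, EU",42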
A complete example
use warnings;
use strict;
use Text::CSV;
my $csv = Text::CSV->new ( { binary => 1, eol => "\n" } )
or die "Cannot use CSV: " . Text::CSV->error_diag();
my $file = 'output.csv';
open my $fh_out, '>', $file or die "Can't open $file for writing: $!";
my @headers = qw( COL_NAME1 COL_NAME2 COL_NAME3 COL_NAME4 );
my @data = 1..4;
$csv->print($fh_out, \@headers);
$csv->print($fh_out, \@data);
close $fh_out;
which produces the file output.csv:
COL_NAME1,COL_NAME2,COL_NAME3,COL_NAME4
1,2,3,4

Perl: Find a match, remove the same lines, and to get the last field

Being a Perl newbie, please pardon me for asking this basic question.
I have a text file on @server1 that shows a bunch of sentences (whitespace is the field separator) on many lines in the file.
I needed to match lines with my keyword, remove the same lines, and extract only the last field, so I have tried with:
my @allmatchedlines;
open(output1, "ssh user1@server1 cat /tmp/myfile.txt |");
while(<output1>) {
    chomp;
    @allmatchedlines = $_ if /mysearch/;
}
close(output1);
my @uniqmatchedline = split(/ /, @allmatchedlines);
my $lastfield = "$uniqmatchedline[-1]\n";
print "$lastfield\n";
and it gives me the output showing:
1
I don't know why it's giving me just "1".
Could someone please explain why I'm getting "1" and how I can get the last field of the matched line correctly?
Thank you!
my @uniqmatchedline = split(/ /, @allmatchedlines);
You're getting "1" because split takes a scalar, not an array. An array in scalar context returns the number of elements.
You need to split on each individual line. Something like this:
my @uniqmatchedline = map { split(/ /, $_) } @allmatchedlines;
There are two issues with your code:
split expects a scalar value (a string) to split; if you pass an array, it is converted to a scalar (which is just the array length)
You did not have a way to remove duplicate lines
To address these, the following code should work (not tested, as there is no data):
my @allmatchedlines;
open(output1, "ssh user1\@server1 cat /tmp/myfile.txt |");
while(<output1>) {
    chomp;
    push @allmatchedlines, $_ if /mysearch/;
}
close(output1);
my %existing;
my @uniqmatchedline = grep !$existing{$_}++, @allmatchedlines;    # this will return the unique lines
my @lastfields = map { ((split / /, $_)[-1]) . "\n" } @uniqmatchedline;    # this maps the last field of each line into an array
print for @lastfields;
Apart from two errors in the code, I find the statement "remove the same lines and extract only the last field" unclear. Once duplicate matching lines are removed, there may still be multiple distinct sentences with the pattern.
Until a clarification comes, here is code that picks the last field from the last such sentence.
use warnings 'all';
use strict;
use List::MoreUtils qw(uniq);
my $file = '/tmp/myfile.txt';
my $cmd = "ssh user1\@server1 cat $file";
my @allmatchedlines;
open my $fh, '-|', $cmd or die "Error opening $cmd: $!";
while (<$fh>) {
    chomp;
    push @allmatchedlines, $_ if /mysearch/;
}
close $fh;
my @unique_matched_lines = uniq @allmatchedlines;
my $lastfield = ( split ' ', $unique_matched_lines[-1] )[-1];
print $lastfield, "\n";
I changed to the three-argument open, with error checking. Recall that open for a process involves a fork and returns a pid, so an "error" doesn't at all relate to what happened with the command itself. See open. Also note that @ inside "..." indicates an array and thus needs to be escaped.
The (default) pattern ' ' used in split splits on any amount of whitespace. The regex / / turns off this behavior and splits on a single space. You most likely want to use ' '.
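A quick illustration of the difference (made-up input):
my $line = "a   b  c";
my @any_whitespace = split ' ', $line;    # ("a", "b", "c")
my @single_spaces  = split / /, $line;    # ("a", "", "", "b", "", "c")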
For more comments please see the original post below.
The statement @allmatchedlines = $_ if /mysearch/; on every iteration assigns to the array, overwriting whatever has been in it. So you end up with only the last line that matched mysearch. You want push @allmatchedlines, $_ ... to get all those lines.
Also, as shown in the answer by Justin Schell, split needs a scalar so it is taking the length of @allmatchedlines – which is 1 as explained above. You should have
my @words_in_matched_lines = map { split } @allmatchedlines;
When all this is straightened out, you'll have words in the array @uniqmatchedline and if that is the intention then its name is misleading.
To get unique elements of the array you can use the module List::MoreUtils:
use List::MoreUtils qw(uniq);
my @unique_elems = uniq @whole_array;

Data::Dumper wraps second word's output

I'm experiencing a rather odd problem while using Data::Dumper to try and check on my importing of a large list of data into a hash.
My data looks like this in another file:
##Product ID => Market for product
ABC => Euro
XYZ => USA
PQR => India
Then in my script, I'm trying to read in my list of data into a hash like so:
open(CONFIG_DAT_H, "<", $config_data);
while(my $line = <CONFIG_DAT_H>) {
    if($line !~ /^\#/) {
        chomp($line);
        my @words = split(/\s*\=\>\s/, $line);
        %product_names->{$words[0]} = $words[1];
    }
}
close(CONFIG_DAT_H);
print Dumper (%product_names);
My parsing is working for the most part, in that I can find all of my data in the hash, but when I print it using Data::Dumper it doesn't print properly. This is my output:
$VAR1 = 'ABC';
';AR2 = 'Euro
$VAR3 = 'XYZ';
';AR4 = 'USA
$VAR5 = 'PQR';
';AR6 = 'India
Does anybody know why the Dumper is printing the '; characters over the first two letters on my second column of data?
There is one unclear thing in the code: is *product_names a hash or a hashref?
If it is a hash, you should use $product_names{key} syntax, not %product_names->{key}, and you need to pass a reference to Data::Dumper, so Dumper(\%product_names).
If it is a hashref then it should be labelled with the correct sigil, so $product_names->{key} and Dumper($product_names).
As noted by mob, if your input ends with anything other than \n it needs to be cleaned up more explicitly, say with s/\s*$//. See the answer by ikegami.
I'd also like to add that the loop can be simplified by losing the if branch:
open my $config_dat_h, "<", $config_data or die "Can't open $config_data: $!";
while (my $line = <$config_dat_h>)
{
    next if $line =~ /^\#/;    # or /^\s*\#/ to account for possible spaces
    # ...
}
I have changed to the lexical filehandle, the recommended practice with many advantages. I have also added a check for open, which should always be in place.
Hmm... this appears wrong to me, even if you're using Perl 6:
%product_names->{$words[0]} = $words[1];
I don't know Perl 6 very well, but in Perl 5 the element access should be as below, assuming %product_names exists and is declared:
$product_names{...} = ... ;
If you could post the full code, I can help solve this problem.
The file uses CR LF as line endings. This would become evident by adding the following to your code:
local $Data::Dumper::Useqq = 1;
You could convert the file to use unix line endings (seeing as you are on a unix system). This can be achieved using the dos2unix utility.
dos2unix config.dat
Alternatively, replace
chomp($line);
with the more flexible
$line =~ s/\s+\z//;
Note: %product_names->{$words[0]} makes no sense. It happens to do what you want in old versions of Perl, but it rightfully throws an error in newer versions. $product_names{$words[0]} is the proper syntax for accessing the value of an element of a hash.
Tip: You should be using print Dumper(\%product_names); instead of print Dumper(%product_names);.
Tip: You might also find local $Data::Dumper::Sortkeys = 1; useful. Data::Dumper has such bad defaults :(
Tip: Using split(/\s*=>\s*/, $line, 2) instead of split(/\s*=>\s*/, $line) would permit the value to contain =>.
Tip: You shouldn't use global variables without reason. Use open(my $CONFIG_DAT_H, ...) instead of open(CONFIG_DAT_H, ...), and replace other instances of CONFIG_DAT_H with $CONFIG_DAT_H.
Tip: Using next if $line =~ /^#/; would avoid a lot of indenting.
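Putting those tips together, a minimal corrected version of the reading loop might look like this (a sketch only; it assumes $config_data is set and Data::Dumper is loaded as in the question, and it was not tested against the original data file):
open(my $CONFIG_DAT_H, "<", $config_data)
    or die "Can't open $config_data: $!";
my %product_names;
while (my $line = <$CONFIG_DAT_H>) {
    next if $line =~ /^#/;                        # skip comment lines
    $line =~ s/\s+\z//;                           # robust against CR LF line endings
    my ($id, $market) = split(/\s*=>\s*/, $line, 2);
    $product_names{$id} = $market;
}
close($CONFIG_DAT_H);
$Data::Dumper::Sortkeys = 1;
print Dumper(\%product_names);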

Perl split() Function Not Handling Pipe Character Saved As A Variable

I'm running into a little trouble with Perl's built-in split function. I'm creating a script that edits the first line of a CSV file which uses a pipe for column delimitation. Below is the first line:
KEY|H1|H2|H3
However, when I run the script, here is the output I receive:
Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|
I have a feeling that Perl doesn't like the fact that I use a variable to actually do the split, and in this case, the variable is a pipe. When I replace the variable with an actual pipe, it works perfectly as intended. How could I go about splitting the line properly when using pipe delimitation, even when passing in a variable? Also, as a silly caveat, I don't have permissions to install an external module from CPAN, so I have to stick with built-in functions and modules.
For context, here is the necessary part of my script:
our $opt_h;
our $opt_f;
our $opt_d;
# Get user input - filename and delimiter
getopts("f:d:h");
if (defined($opt_h)) {
    &print_help;
    exit 0;
}
if (!defined($opt_f)) {
    $opt_f = &promptUser("Enter the Source file, for example /qa/data/testdata/prod.csv");
}
if (!defined($opt_d)) {
    $opt_d = "\|";
}
my $delimiter = "\|";
my $temp_file = $opt_f;
my @temp_file = split(/\./, $temp_file);
$temp_file = $temp_file[0]."_add-headers.".$temp_file[1];
open(source_file, "<", $opt_f) or die "Err opening $opt_f: $!";
open(temp_file, ">", $temp_file) or die "Error opening $temp_file: $!";
my $source_header = <source_file>;
my @source_header_columns = split(/${delimiter}/, $source_header);
chomp(@source_header_columns);
for (my $i=1; $i<=scalar(@source_header_columns); $i++) {
    print temp_file "Col$i";
    print temp_file "$delimiter";
}
print temp_file "\n";
while (my $line = <source_file>) {
    print temp_file "$line";
}
close(source_file);
close(temp_file);
The first argument to split is a compiled regular expression or a regular expression pattern. If you want to split on the text |, you'll need to pass a pattern that matches |.
quotemeta creates a pattern from a string that matches that string.
my $delimiter = '|';
my $delimiter_pat = quotemeta($delimiter);
split $delimiter_pat
Alternatively, quotemeta can be accessed as \Q..\E inside double-quoted strings and the like.
my $delimiter = '|';
split /\Q$delimiter\E/
The \E can even be omitted if it's at the end.
my $delimiter = '|';
split /\Q$delimiter/
I mentioned that split also accepts a compiled regular expression.
my $delimiter = '|';
my $delimiter_re = qr/\Q$delimiter/;
split $delimiter_re
If you don't mind hardcoding the regular expression, that's the same as
my $delimiter_re = qr/\|/;
split $delimiter_re
First, the | isn't special inside double quotes. Setting $delimiter to just "|" and then making sure it is quoted later would work, or possibly setting $delimiter to "\\|" would be OK by itself.
Second, the | is special inside a regex, so you want to quote it there. The safest way to do that is to ask Perl to quote it for you: use the \Q...\E construct within the regex to mark out the data you want quoted.
my #source_header_columns = split(/\Q${delimiter}\E/, $source_header);
see: http://perldoc.perl.org/perlre.html
It seems as if all you want to do is count the fields in the header and print the header. Might I suggest something a bit simpler than using split?
my $str="KEY|H1|H2|H3";
my $count=0;
$str =~ s/\w+/"Col" . ++$count/eg;
print "$str\n";
This works with almost any delimiter (except alphanumerics and underscore), and it also saves the number of fields in $count, in case you need it later.
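For the sample header above this prints Col1|Col2|Col3|Col4 and leaves $count set to 4.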
Here's another version. This one uses character class brackets instead, to specify "any character but this", which is just another way of defining a delimiter. You can specify the delimiter from the command line. You could use your getopts as well, but I just used a simple shift.
my $d = shift || '[^|]';
if ( $d !~ /^\[/ ) {
    $d = '[^' . $d . ']';
}
my $str="KEY|H1|H2|H3";
my $count=0;
$str =~ s/$d+/"Col" . ++$count/eg;
print "$str\n";
By using the brackets, you do not need to worry about escaping metacharacters.
#!/usr/bin/perl
use Data::Dumper;
use strict;
my $delimeter="\\|";
my $string="A|B|C|DD|E";
my @arr=split(/$delimeter/,$string);
print Dumper(@arr)."\n";
output:
$VAR1 = 'A';
$VAR2 = 'B';
$VAR3 = 'C';
$VAR4 = 'DD';
$VAR5 = 'E';
It seems you need to define the delimiter as "\\|".