I hope this question isn't too basic - I'm pretty inexperienced with
Perl. My problem: I want to read and process a file in chunks, but the
delimiters of the chunks may vary. I have the entire file in a variable
$text. As an example:
One
Two
BEGIN
Three
Four
END
Five
I want to step through this file in chunks. I want to read until the next
empty line and save (and process) the result as one chunk, so "One" and
"Two" would be the first two chunks. If the new chunk begins with the
keyword "BEGIN", I want to read and process until the keyword "END", so that
chunk would be "Three\nFour". How would I do this in Perl?
I have read about the "index" function, but couldn't make it step through
my $text.
Thanks a lot!
You could set the input record separator to an empty string to enable "paragraph" mode, then use the flip-flop operator to handle the range between BEGIN/END; something like:
perl -nle 'BEGIN{$/=""} if (/^BEGIN/../^END/) {print "> $_"} else {print "[ $_ ]"}' myfile
(Setting $/ inside a BEGIN block makes sure paragraph mode is in effect before the first read; assigning it in the loop body would leave the first chunk read with the default separator.)
I am working on a TPT script to process some large files we have. Right now, the fields in each record are separated by a pipe delimiter, |.
The problem is that not all fields are used by each record. For example, record 1 may have 100 fields and record 2 may have 260. For TPT to work, we need a delimiter for each field, so for the records that have fewer than 261 fields populated, I need to append the appropriate number of pipes to the end of the record.
So, taking my example above, record one would have 161 pipes appended to the end and record two would have 1.
I have a Perl script which will count the number of pipes in each record, but I am not sure how to take that info and accomplish the task of appending that many pipes to the record.
perl -ne 'print scalar(split(/\|/, $_)) . "\n"'
Any advice?
To get the number of pipe symbols, you can use the tr operator.
my $count = tr/|//;
Just subtract the number of pipe symbols from the maximum number to get the number of pipes to add, and use the x (repetition) operator to generate them:
perl -lne 'print $_, "|" x (260 - tr/|//)'
I'm not sure the number is correct; it depends on whether pipes also start or end the line.
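A quick demonstration with a maximum of 5 delimiters instead of 260 (numbers shrunk for readability):
echo 'a|b|c' | perl -lne 'print $_, "|" x (5 - tr/|//)'
# prints: a|b|c|||
The line has 2 pipes, so 3 more are appended.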
This was the original question:
Using Perl, how can I detect from the command line whether a specified file contains only a specified character, like '0' for example?
I tried
perl -ne 'print if s/(^0*$)/yes/' filename
But it cannot handle all the cases, for example files with multiple lines or lines containing other characters.
Sample input -
A file containing only zeros -
0000000000000000000000000000000000000000000000000000000000000
output - "yes"
Empty file
output - "no"
File containing zeros but has newline
000000000000000000
000000000000
output - "no"
File containing mixture
0324234-234-324000324200000
output - "no"
-0777 causes $/ to be set to undef, causing the whole file to be read when you read a line, so
perl -0777ne'print /^0+$/ ? "yes" : "no"' file
or
perl -0777nE'say /^0+$/ ? "yes" : "no"' file # 5.10+
Use \z instead of $ if you want to make sure there's no trailing newline. (A text file should have a trailing newline.)
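For instance, the two differ on input that ends with a newline:
printf '0000\n' | perl -0777ne'print /^0+$/ ? "yes" : "no"'    # yes
printf '0000\n' | perl -0777ne'print /^0+\z/ ? "yes" : "no"'   # no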
To print yes if a file contains at least one 0 character and nothing else, and otherwise no, write
perl -0777 -ne 'print /\A0+\z/ ? "yes" : "no"' myfile
I suspect you want a more generic solution than just detecting zeroes, but I haven't got time to write it for you till tomorrow. Anyway, here is what I think you need to do:
1. Slurp your entire file into a single string "s" and get its length (call it "L")
2. Get the first character of the string, using substr(s,0,1)
3. Create a second string that repeats the first character "L" times, using firstchar x L
4. Check the second string is equal to the slurped file
5. Print "No" if not equal else print "Yes"
If your file is big and you don't want to hold two copies in memory, just test character by character using substr(). If you want to ignore newlines and carriage returns, just use "tr" to delete them from "s" before step 2.
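A rough sketch of those steps (untested; it assumes the file fits in memory and prints "No" for an empty file):

#!/usr/bin/perl
use strict;
use warnings;

local $/;                            # slurp mode
my $s = <>;                          # step 1: the whole file as one string
$s = "" unless defined $s;           # an empty file reads as undef
$s =~ tr/\r\n//d;                    # optional: ignore newlines and carriage returns
my $len = length $s;                 # its length "L"

if ($len == 0) {
    print "No\n";
}
else {
    my $first = substr $s, 0, 1;     # step 2: the first character
    print $first x $len eq $s        # steps 3 and 4: repeat it L times and compare
        ? "Yes\n"                    # step 5
        : "No\n";
}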
Normally, I do something like
IFS=','
columns=( $LINE )
where $LINE is a line from a csv file I'm reading.
However, how do I handle a csv file with embedded commas? I have to handle several hundred gigabytes of files, so everything needs to be done quickly, i.e., no multiple readings of a line, and definitely no loops (last time I tried that, it slowed things down by several factors).
The general structure of the code is as follows
FILENAME=$1
cat $FILENAME | while read LINE
do
IFS=","
columns=( $LINE )
# apply column changes here
newline="${columns[*]}"
echo "$newline"
done
Preferably, I need something that goes
FILENAME=$1
cat $FILENAME | while read LINE
do
IFS=","
# code to tell bash to ignore if IFS is within an open quote
columns=( $LINE )
# apply column changes here
newline="${columns[*]}"
echo "$newline"
done
Any tips would be appreciated. Otherwise, I'll probably switch to using another language to handle this stuff.
Probably embedded commas are just the first obvious problem that you encountered while parsing those CSV files.
Future problems that might pop up are:
embedded newline separator characters
embedded utf8 chars
special treatment for whitespaces, empty fields, spaces around commas, undef values
I generally tend to follow the philosophy that if there is a (reputable) module that parses a format you have to parse, you should use it instead of making a homebrew parser.
I don't think there is such a thing for bash, but there are some for Perl. I'd go for Text::CSV_XS. Being written in C, I expect it to be very fast.
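For example, a small filter like this could re-delimit the data so a naive split keeps working. It is only a sketch: it assumes Text::CSV_XS is installed and that "|" never occurs inside a field.

#!/usr/bin/perl
# Sketch: read real CSV (quoted fields, embedded commas) on stdin
# and re-emit it with "|" as the delimiter.
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

while (my $row = $csv->getline(\*STDIN)) {
    print join("|", @$row), "\n";
}

The bash loop could then read from this filter's output with IFS="|" instead of reading the raw file.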
You can use sed or something similar to convert the commas within quotes to some other sequence or punctuation. If you don't care about the stuff in quotes then you do not even need to change them back. You can do this on the whole file:
sed 's/\("[^,"]*\),\([^"]*"\)/\1;\2/g' input.csv > intermediate.csv
or on each line:
line=$(echo "$line" | sed 's/\("[^,"]*\),\([^"]*"\)/\1;\2/g')
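For example, a line like a,"b,c",d becomes a,"b;c",d - only the comma inside the quotes is touched. Note that with several commas in one quoted field, only the first is converted per pass, so such files would need more than one pass.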
This isn't a complete answer, but it's a possible approach.
Find a character that never occurs in the input file. Use a C program that parses the CSV file and prints lines to standard output with a different delimiter. Writing that program is left as an exercise, but I'm sure there's CSV-parsing C source code out there. Pipe the output of the C program into your script.
For example:
FILENAME=$1
new_c_program $FILENAME | while read LINE
do
IFS="|"
# code to tell bash to ignore if IFS is within an open quote
columns=( $LINE )
# apply column changes here
newline="${columns[*]}"
echo "$newline"
done
A minor point: I'd pick a name other than $newline; newline suggests an end-of-line marker rather than an entire line.
Another minor point: you have a "Useless Use Of cat" in the code in your question. You could replace this:
cat $FILENAME | while read LINE
do
...
done
by this:
while read LINE
do
...
done < $FILENAME
But if you replace cat by the hypothetical C program I suggested, you still need the pipe.
I am parsing a log file, trying to pull out the lines where the value after "failures=" is a unique non-zero number.
The first part of my Perl one-liner pulls out all the lines where "failures" is greater than zero. But that part of the log file repeats until a new failure occurs; i.e., after the first failure the log entries read "failures=1" until the second failure, and then they read "failures=2".
What I'd like to do is pull only the first line where that value changes and I thought I had it with this:
cat -n Logstats.out | perl -nle 'print "Line No. $1: failures=$2; eventDelta=$3; tracking_id=$4" if /\s(\d+)\t.*failures=(\d+).*eventDelta=(.\d+).*tracking_id="(\d+)"/ && $2 > 0' | perl -ne 'print unless $a{/failures=\d+/}++'
However, that only pulls the first non-zero "failure" line and nothing else. What do I need to change for it to pull all the unique values for "failures"?
thanks in advance!
Update: The amount of text in each line up to the "tracking_id" is more text than I can post. Sorry!
2011-09-06 14:14:18 [INFO] [logStats]: name=somename id=d6e6f4f0-4c0d-93b6-7a71-8e3100000030
successes=1 failures=0 skipped=0 eventDelta=41 original=188 simulated=229
totalDelta=41 averageDelta=41 min=0 max=41 averageOriginalDuration=188 averageSimulatedDuriation=229(txid = b3036eca-6288-48ef-166f-5ae200000646
date = 2011-09-02 08:00:00 type = XML xml
=
perl -ne 'print unless $a{/failures=\d+/}++'
does not work because a hash subscript is evaluated in scalar context, and the m// operator does not return the match in scalar context. Instead, it returns 1. So (since every line matches), what you wrote is equivalent to:
perl -ne 'print unless $a{1}++'
and I think you can see the problem there.
There are a number of ways to work around that, but I'd use:
perl -ne 'print unless /(failures=\d+)/ and $a{$1}++'
However, I'd do the whole thing in one call to perl, including numbering the lines:
perl -nle '
print "Line No. $.: failures=$1; eventDelta=$2; tracking_id=$3"
if /failures=(\d+).*?eventDelta=(.\d+).*?tracking_id="(\d+)"/
&& $1 > 0
&& !$seen{$1}++
' Logstats.out
($. automatically counts input lines in Perl. The line breaks can be removed if desired, but it will actually work as is.)
You could use a hash to store the results and print them at the end:
perl -nle '$f{$2} ||= "Line No. $1: failures=$2; eventDelta=$3; tracking_id=$4" if /\s(\d+)\t.*failures=(\d+).*eventDelta=(.\d+).*tracking_id="(\d+)"/ && $2; END{ print $f{$_} for keys %f }' Logstats.out
(not tested due to missing input data...)
HTH,
Paul
Since your input does not match your regex, I can't really help you directly. But I can tell you that your pattern does a lot of backtracking, and that's bad if there is a lot of data after the part that you're interested in.
So here are some alternative ideas:
my $re = qr{ \s # a single space
failures=(\d+) # the entry for failures
\s+ # at least one space
skipped=\d+ # skipped
\s+
eventDelta=(.\d+)
.*? # any number of non-newline characters *UNTIL* ...
\btracking_id="(\d+)" # another specified sequence of data
}x;
The regex engine will scan "skipped=" followed by a group of digits much faster than it can scan the rest of the line and then backtrack to "eventDelta=" when it fails, so if you know a field will always be there, it is better to spell it out.
Since you don't show tracking_id in your example, I can't tell how it occurs, so in this case we use a non-greedy "any" match, which keeps looking ahead for the next literal sequence. Again, if there is a lot of data in the line, you do not want to scan to the end and backtrack until you find that you've already read 'tracking_id="nnn"'. Lookaheads cost processing time, so it is still better to spell out "skipped=" and all possible values than to rely on a non-greedy "any" match.
You'll also notice that after accepting any data, I specify that tracking_id should appear at a word boundary, which disambiguates it from the possible (though unlikely) "backtracking_id=".
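Putting that pattern back into the earlier style of one-liner might look like this (untested, since the sample input doesn't show the tracking_id part):
perl -nle '
    print "Line No. $.: failures=$1; eventDelta=$2; tracking_id=$3"
        if /\sfailures=(\d+)\s+skipped=\d+\s+eventDelta=(.\d+).*?\btracking_id="(\d+)"/
        && $1 > 0
        && !$seen{$1}++
' Logstats.out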
I have a script that reads a large file line by line. The record separator ($/) that I would like to use is "\n". The only problem is that the data on each line contains CRLF characters ("\r\n"), which the program should not consider the end of a line.
For example, here is a sample data file (with the newlines and CRLFs written out):
line1contents\n
line2contents\n
line3\r\ncontents\n
line4contents\n
If I set $/ = "\n", then it splits the third line into two lines. Ideally, I could just set $/ to a regex that matches \n and not \r\n, but I don't think that's possible. Another possibility is to read in the whole file, then use the split function to split on said regex. The only problem is that the file is too large to load into memory.
Any suggestions?
For this particular task, it sounds pretty straightforward to check your line ending and append the next line as necessary:
$/ = "\n";
...
while(<$input>) {
while( substr($_,-2) eq "\r\n" ) {
$_ .= <$input>;
}
...
}
This is the same logic used to support line continuation in a number of different programming contexts.
You are right that you can't set $/ to a regular expression.
dos2unix would put a UNIX newline character in for the "\r\n" and so wouldn't really solve the problem. I would use a regex that replaces all instances of "\r\n" with a space or tab character and save the results to a different file (since you don't want to split the line at those points). Then I would run your script on the new file.
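Since every embedded "\r\n" ends up at the end of a physical line when reading with $/ set to "\n", a one-liner along these lines could do that rewrite (assuming a tab never occurs in your data):
perl -pe 's/\r\n$/\t/' input.txt > fixed.txt
Each "\r\n" becomes a tab and the continuation line is joined to its predecessor, while lines ending in a plain "\n" pass through untouched.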
Try using dos2unix on the file first, and then read in as normal.