When I have a simple file like
Ann Math 99
Bob Math 100
Ann Chemistry 92
Ann History 78
I may split it into files per person with
awk '{print > $1}' input_filename
However, when the file becomes complex, it is no longer possible to do so unless I use a very complex regex as a field separator. I find that I can extract the output filename with a regex, and the following command seems to do what I want on a 5-line test:
sed 5q input_filename | perl -nle 'if(/\[([A-Za-z0-9_]+)\]/){open(FH,">>",$1); print FH $_; close FH}'
but the file is large and the command seems to be inefficient. Are there better ways to do it?
The original files are like this:
SOME_VERY_LONG_STUFF[TAG1]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG2]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG3]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG1]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG3]SOME_EVEN_LONGER_STUFF
...
and I just want to split it into files named TAG1, TAG2, TAG3, ..., where each file contains exactly those lines of the original file that have that tag in brackets.
Here is the first line, with small modifications:
Nov 30 18:00:00 something#syslog: [2019-11-30 18:00:00][BattleEnd],{"result":1,"life":[[0,30,30],[1,30,30],[2,30,29],[3,30,29],[4,30,29],[5,28,29],[6,28,21],[7,28,21],[8,28,14],[9,28,14],[10,29,13],[11,21,13],[12,21,13],[13,15,13],[14,16,12],[15,12,12],[16,12,12],[17,9,12],[18,9,12],[19,5,12],[20,5,12],[21,3,12],[22,3,12],[23,1,12],[24,1,10],[25,1,10],[26,1,10],[27,1,10],[28,2,9],[29,-1,9]],"Info":[[160,0],[161,0],[162,0],[163,0],[155,0],[157,0],[158,0],[159,0]],"cards":[11401,11409,11408,12201,12208,10706,12002,10702,12207,12204,12001,12007,12208,10702,12005,10701,12005,11404,10705,10705,12007,11401,10706,12002,12001,12204,10701,12207,11404,11409,11408,12201]}
the tag I want is "BattleEnd". I want to split the log according to log sources.
EDIT: Since the OP changed the samples, adding this code now, based completely on the samples shown. With -F"[][]" both [ and ] act as field separators, so on the syslog sample the tag in the second pair of brackets (BattleEnd) lands in $4 and becomes the output filename.
awk -F"[][]" '{print >> ($4);close($4)}' Input_file
OR, if you want to close the output files (to avoid a "too many open files" error) whenever the previous field no longer matches, try the following.
awk -F"[][]" 'prev!=$4{close(prev)} {print >> ($4);prev=$4}' Input_file
Could you please try the following, based on your shown samples.
awk '
match($0,/[^]]*/){
  # take everything up to the first "]"
  val=substr($0,RSTART,RLENGTH)
  # strip everything up to and including the last "[", leaving the tag
  sub(/.*\[/,"",val)
  # append the line to a file named after the tag, then close the file
  print >> (val)
  close(val)
}
' Input_file
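Against the original TAG samples, this should leave one file per tag; for instance:
$ cat TAG1
SOME_VERY_LONG_STUFF[TAG1]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG1]SOME_EVEN_LONGER_STUFF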
I have a number of paragraphs that have returns at the end of each line. I do not want returns at the end of lines; I will let the layout program take care of that. I would like to remove the returns and replace them with spaces.
The issue is that I do want returns in between paragraphs. So, if there is more than one return in a row (2, 3, etc.), I would like to keep two returns.
This would allow for there to be paragraphs, with one blank line between them, but all other line formatting would be removed. The layout program could then worry about the line breaks, rather than having them determined by a set number of characters, as they are now.
I would like to use Perl to accomplish this change, but am open to other methods.
example text:
This is a test.
This is just a test.

This too is a test.
This too is just a test.
would become:
This is a test. This is just a test.

This too is a test. This too is just a test.
Can this be done easily?
Using a Perl one-liner: replace 2 or more newlines with just 2, and replace each single newline with a space:
perl -0777 -pe 's{(\n{2})\n*|\n}{$1//" "}eg' file.txt > newfile.txt
Switches:
-0777: Slurps the entire file
-p: Creates a while(<>){ ...; print } loop over the input; with -0777 the whole file is read as a single "line".
-e: Tells perl to execute the code on command line.
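For the sample text in the question, a run should produce (trailing whitespace aside):

$ perl -0777 -pe 's{(\n{2})\n*|\n}{$1//" "}eg' file.txt
This is a test. This is just a test.

This too is a test. This too is just a test.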
I came up with another solution and also wanted to explain what your regex was matching.
Matt@MattPC ~/perl/testing/8
$ cat input.txt
This is a test.
This is just a test.

This too is a test.
This too is just a test.

another test.
test.
Matt@MattPC ~/perl/testing/8
$ perl -e '$/ = undef; $_ = <>; s/(?<!\n)\n(?!\n)/ /g; s/\n{2,}/\n\n/g; print' input.txt
This is a test. This is just a test.

This too is a test. This too is just a test.

another test. test.
I basically just wrote a perl program and mashed it into a one-liner. It would normally look like this.
# First two lines read in the whole file
$/ = undef;
$_ = <>;
# This regex replaces every `\n` by a space
# if it is not preceded or followed by a `\n`
s/(?<!\n)\n(?!\n)/ /g;
# This replaces every two or more \n by two \n
s/\n{2,}/\n\n/g;
# finally print $_
print;
perl -p -i -e 's/(\w+|\s+)[\r\n]/$1 /g' abc.txt
Part of the problem here is what you are matching. (\w+|\s+) matches either one or more word characters (the same as [a-zA-Z0-9_]) or one or more whitespace characters (the same as [\t\n\f\r ]).
This wouldn't match your input: every text line ends with a period, and neither alternative matches a period, so nothing can sit directly before the [\r\n]. Even the blank lines fail, since they would need two whitespace characters (one for \s+ and one for [\r\n]) but contain only the newline itself.
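One possible fix is to slurp the whole file so the substitutions can see the paragraph breaks; a sketch mirroring the substitutions in the answer above (-0777 replaces the line-by-line reading):

perl -0777 -p -i -e 's/(?<!\n)\n(?!\n)/ /g; s/\n{2,}/\n\n/g' abc.txt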
Normally, I do something like
IFS=','
columns=( $LINE )
where $LINE is a line from a csv file I'm reading.
However, how do I handle a csv file with embedded commas? I have to handle several hundred gigabytes of files, so everything needs to be done quickly, i.e., no multiple readings of a line and definitely no loops (last time I tried that, it slowed things down by several factors).
The general structure of the code is as follows
FILENAME=$1
cat $FILENAME | while read LINE
do
IFS=","
columns=( $LINE )
# affect columns changes here
newline="${columns[*]}"
echo "$newline"
done
Preferably, I need something that goes
FILENAME=$1
cat $FILENAME | while read LINE
do
IFS=","
# code to tell bash to ignore if IFS is within an open quote
columns=( $LINE )
# affect columns changes here
newline="${columns[*]}"
echo "$newline"
done
Any tips would be appreciated. Otherwise, I'll probably switch to using another language to handle this stuff.
Embedded commas are probably just the first obvious problem you've encountered while parsing those CSV files.
Future problems that might pop up are:
embedded newline separator characters
embedded UTF-8 characters
special treatment of whitespace, empty fields, spaces around commas, and undef values
I generally tend to follow the philosophy that if there is a (reputable) module that parses a format you have to parse, use it instead of making a homebrew.
I don't think there is such a thing for bash, but there are some for Perl. I'd go for Text::CSV_XS; being written in C, I expect it to be very fast.
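A minimal sketch of such a filter (the file name input.csv and the output delimiter | are assumptions; adjust both to taste):

#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV_XS;

# binary => 1 allows embedded newlines and other odd bytes in fields;
# auto_diag => 1 dies with a useful message on malformed input
my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

open my $fh, '<', 'input.csv' or die "input.csv: $!";
while (my $row = $csv->getline($fh)) {
    # getline() has already resolved quoting and embedded commas
    print join('|', @$row), "\n";
}
close $fh;

Piped into the while read loop from the question, this would let IFS="|" split the fields safely, provided | never occurs in the data.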
You can use sed or something similar to convert the commas within quotes to some other sequence or punctuation. If you don't care about the stuff in quotes, then you do not even need to change them back. Note that the pattern below only rewrites the first comma in each quoted field per pass, so fields with several embedded commas need repeated passes. You can do this on the whole file:
sed 's/\("[^,"]*\),\([^"]*"\)/\1;\2/g' input.csv > intermediate.csv
or on each line:
line=$(echo "$line" | sed 's/\("[^,"]*\),\([^"]*"\)/\1;\2/g')
This isn't a complete answer, but it's a possible approach.
Find a character that never occurs in the input file. Use a C program that parses the CSV file and prints lines to standard output with a different delimiter. Writing that program is left as an exercise, but I'm sure there's CSV-parsing C source code out there. Pipe the output of the C program into your script.
For example:
FILENAME=$1
new_c_program $FILENAME | while read LINE
do
IFS="|"
# code to tell bash to ignore if IFS is within an open quote
columns=( $LINE )
# affect columns changes here
newline="${columns[*]}"
echo "$newline"
done
A minor point: I'd pick a name other than $newline; newline suggests an end-of-line marker rather than an entire line.
Another minor point: you have a "Useless Use Of cat" in the code in your question. You could replace this:
cat $FILENAME | while read LINE
do
...
done
by this:
while read LINE
do
...
done < $FILENAME
But if you replace cat by the hypothetical C program I suggested, you still need the pipe.
I have a text file which looks something like this:
jdkjf
kjsdh
jksfs
lksfj
gkfdj
gdfjg
lkjsd
hsfda
gadfl
dfgad
[very many lines, that is]
but would rather have it look like
jdkjf kjsdh
jksfs lksfj
gkfdj gdfjg
lkjsd hsfda
gadfl dfgad
[and so on]
so I can print the text file on a smaller number of pages.
Of course, this is not a difficult problem, but I'm wondering if there is some excellent tool out there for solving problems like these.
EDIT: I'm not looking for a way to remove every other newline from a text file, but rather a tool which interprets text as "pictures" and then lays these out on the page nicely (by writing the appropriate whitespace symbols).
You can use this Python code:

group_size = input("Enter number of lines to join ")
matrix = []
file = open("test.txt")
for line in file:
    # collect lines, stripping the trailing newline
    matrix.append(line.replace("\n", ""))
    if len(matrix) == int(group_size):
        # once enough lines are collected, print them on one line
        print(" ".join(matrix))
        matrix = []
# print any leftover lines if the total is not a multiple of group_size
if matrix:
    print(" ".join(matrix))
file.close()
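For the sample above saved as test.txt, a run might look like this (the script name is hypothetical):

$ python3 join.py
Enter number of lines to join 2
jdkjf kjsdh
jksfs lksfj
gkfdj gdfjg
lkjsd hsfda
gadfl dfgad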
(Since you don't name your operating system, I'll simply assume Linux, Mac OS X or some other Unix...)
Your example looks like it can also be described by the expression "joining 2 lines together".
This can be achieved in a shell (with the help of xargs and awk), but only for an input file that is structured like your example (the result always puts 2 words on a line, irrespective of how many words each line contains):
cat file.txt | xargs -n 2 | awk '{ print $1" "$2 }'
This can also be achieved with awk alone (this time it really joins 2 full lines, irrespective of how many words each one contains):
awk '{printf $0 " "; getline; print $0}' file.txt
Or use sed --
sed 'N;s#\n# #' < file.txt
Also, xargs could do it:
xargs -L 2 < file.txt
I'm sure other people could come up with dozens of other, quite different methods and commandline combinations...
Caveats: You'll have to handle files with an odd number of lines explicitly, as the last input line may not be processed correctly in that case.
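A Perl variant that joins pairs and copes with an odd final line might look like this (a sketch in the same spirit as the commands above):

perl -ne 'chomp; print $_, $. % 2 ? " " : "\n"; END { print "\n" if $. % 2 }' file.txt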
I am parsing a log file, trying to pull out the lines where the value of "failures=" is a unique non-zero number.
The first part of my Perl one-liner pulls out all the lines where "failures" is greater than zero. But that part of the log file repeats until a new failure occurs; i.e., after the first failure the log entries read "failures=1" until the second error, when they read "failures=2".
What I'd like to do is pull only the first line where that value changes and I thought I had it with this:
cat -n Logstats.out | perl -nle 'print "Line No. $1: failures=$2; eventDelta=$3; tracking_id=$4" if /\s(\d+)\t.*failures=(\d+).*eventDelta=(.\d+).*tracking_id="(\d+)"/ && $2 > 0' | perl -ne 'print unless $a{/failures=\d+/}++'
However, that only pulls the first non-zero "failure" line and nothing else. What do I need to change for it to pull all the unique values for "failures"?
thanks in advance!
Update: The amount of text in each line up to the "tracking_id" is more text than I can post. Sorry!
2011-09-06 14:14:18 [INFO] [logStats]: name=somename id=d6e6f4f0-4c0d-93b6-7a71-8e3100000030
successes=1 failures=0 skipped=0 eventDelta=41 original=188 simulated=229
totalDelta=41 averageDelta=41 min=0 max=41 averageOriginalDuration=188 averageSimulatedDuriation=229(txid = b3036eca-6288-48ef-166f-5ae200000646
date = 2011-09-02 08:00:00 type = XML xml
=
perl -ne 'print unless $a{/failures=\d+/}++'
does not work because a hash subscript is evaluated in scalar context, and the m// operator does not return the match in scalar context. Instead, it returns 1. So (since every line matches), what you wrote is equivalent to:
perl -ne 'print unless $a{1}++'
and I think you can see the problem there.
There's a number of ways to work around that, but I'd use:
perl -ne 'print unless /(failures=\d+)/ and $a{$1}++'
However, I'd do the whole thing in one call to perl, including numbering the lines:
perl -nle '
print "Line No. $.: failures=$1; eventDelta=$2; tracking_id=$3"
if /failures=(\d+).*?eventDelta=(.\d+).*?tracking_id="(\d+)"/
&& $1 > 0
&& !$seen{$1}++
' Logstats.out
($. automatically counts input lines in Perl. The line breaks can be removed if desired, but it will actually work as is.)
you could use a hash to store the results and print them:
perl -nle '$f{$2} ||= "Line No. $1: failures=$2; eventDelta=$3; tracking_id=$4" if /\s(\d+)\t.*failures=(\d+).*eventDelta=(.\d+).*tracking_id="(\d+)"/ && $2;END{ print $f{$_} for keys %f }' Logstats.out
(not tested due to missing input data...)
HTH,
Paul
Since your input does not match your regex, I can't really help you. But I can tell you that this regex does a lot of backtracking, and that's bad if there is a lot of data after the part you're interested in.
So here are some alternative ideas:
qr{ \s # a single space
failures=(\d+) # the entry for failures
\s+ # at least one space
skipped=\d+ # skipped
\s+
eventDelta=(.\d+)
.*? # any number of non-newline characters *UNTIL* ...
\btracking_id="(\d+)" # another specified sequence of data
}x;
The parser will scan "skipped=" and then a group of digits much faster than it can scan the rest of the line and backtrack to "eventDelta=" when it fails, so if you know a field will always be there, it is better to spell it out.
Since you don't show tracking_id in your sample, I can't tell how it occurs, so in this case we use a non-greedy "any" match, which keeps looking for the next literal sequence rather than grabbing the rest of the line first. Again, if there is a lot of data in the line, you do not want to scan to the end and then backtrack until you find you've already read tracking_id="nnn". Non-greedy matching still costs processing time, though, so it is better to spell out "skipped=" and its possible values than to rely on a non-greedy "any" match.
You'll also notice that after accepting any data, I specify that tracking_id should appear at a word boundary, which disambiguates it from the possible, though not likely, "backtracking_id=".
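Plugged into the one-liner from the earlier answer, the pattern might be used like this (a sketch based only on the sample shown, untested against the real log):

perl -nle '
    print "Line No. $.: failures=$1; eventDelta=$2; tracking_id=$3"
        if /\s failures=(\d+) \s+ skipped=\d+ \s+ eventDelta=(.\d+) .*? \btracking_id="(\d+)"/x
        && $1 > 0
        && !$seen{$1}++
' Logstats.out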