When I have a simple file like
Ann Math 99
Bob Math 100
Ann Chemistry 92
Ann History 78
I can split it into one file per person with
awk '{print > $1}' input_filename
However, when the file becomes complex, this is no longer possible unless I use a very complex regex as the field separator. I find that I can extract the output filename with a regex, and the following command seems to do what I want in a test on 5 lines:
sed 5q input_filename | perl -nle 'if(/\[([A-Za-z0-9_]+)\]/){open(FH,">>","$1"); print FH $_; close FH}'
but the file is large and the command seems to be inefficient. Are there better ways to do it?
The original files look like this:
SOME_VERY_LONG_STUFF[TAG1]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG2]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG3]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG1]SOME_EVEN_LONGER_STUFF
SOME_VERY_LONG_STUFF[TAG3]SOME_EVEN_LONGER_STUFF
...
I just want to split it into files named TAG1, TAG2, TAG3, ..., where each file contains exactly those lines of the original file that carry that tag in brackets.
Here is the first line, with small modifications:
Nov 30 18:00:00 something#syslog: [2019-11-30 18:00:00][BattleEnd],{"result":1,"life":[[0,30,30],[1,30,30],[2,30,29],[3,30,29],[4,30,29],[5,28,29],[6,28,21],[7,28,21],[8,28,14],[9,28,14],[10,29,13],[11,21,13],[12,21,13],[13,15,13],[14,16,12],[15,12,12],[16,12,12],[17,9,12],[18,9,12],[19,5,12],[20,5,12],[21,3,12],[22,3,12],[23,1,12],[24,1,10],[25,1,10],[26,1,10],[27,1,10],[28,2,9],[29,-1,9]],"Info":[[160,0],[161,0],[162,0],[163,0],[155,0],[157,0],[158,0],[159,0]],"cards":[11401,11409,11408,12201,12208,10706,12002,10702,12207,12204,12001,12007,12208,10702,12005,10701,12005,11404,10705,10705,12007,11401,10706,12002,12001,12204,10701,12207,11404,11409,11408,12201]}
The tag I want is "BattleEnd"; I want to split the log according to its log sources.
EDIT: Since the OP changed the samples, here is code based entirely on the samples now shown.
awk -F"[][]" '{print >> ($4);close($4)}' Input_file
Or, if you want to close the output file whenever the tag differs from the previous line's (to avoid a "too many open files" error while reopening files less often), try the following:
awk -F"[][]" 'prev!=$4{close(prev)} {print >> ($4);prev=$4}' Input_file
Could you please try the following, based on your shown samples:
awk '
match($0,/[^]]*/){                # match from the start of the line up to the first "]"
  val=substr($0,RSTART,RLENGTH)   # grab the matched portion
  sub(/.*\[/,"",val)              # strip everything up to the last "[", leaving only the tag
  print >> (val)                  # append the whole line to the file named after the tag
  close(val)                      # close it to avoid "too many open files"
}
' Input_file
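To see just the tag-extraction part in isolation, you can print the extracted value instead of redirecting output to it (a minimal check against the TAG-style samples):
echo 'SOME_VERY_LONG_STUFF[TAG2]SOME_EVEN_LONGER_STUFF' | awk 'match($0,/[^]]*/){val=substr($0,RSTART,RLENGTH); sub(/.*\[/,"",val); print val}'
# TAG2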
An input file is given in which each line contains delimited data with an extra delimiter at the end, in the data and/or the header, with or without enclosures. The trailing delimiter may or may not be followed by spaces.
Scenario 1 : Header & Data contain extra delimiter at the end
eno|ename|address|
A|B|C|
D|E|F|
Scenario 2 : Header doesn't contain extra delimiter at the end
eno|ename|address
A|B|C|
D|E|F|
Scenario 3 : With enclosures
eno|ename|address|
1|2|"A"|
The final output has to be like this:
Scenario 1 :
eno|ename|address
A|B|C
D|E|F
Scenario 2 :
eno|ename|address
A|B|C
D|E|F
Scenario 3 :
eno|ename|address
1|2|"A"
Here is the solution I have tried so far, but it won't work for all three scenarios. Is there any way I can make a single sed/awk/perl command support all three?
perl -pne 's/(.*)\|/$1/' filename
Could you please try the following:
awk '{gsub(/\|$|\| +$/,"")} 1' Input_file
Explanation:
gsub is the awk function that globally substitutes a matched pattern with the given value.
Explanation of the regex:
/\|$|\| +$/: there are two alternatives here, separated by |. The first, \|$, removes a | at the very end of the line; the second, \| +$, removes a | followed by trailing spaces. Together they take care of both conditions.
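To watch both alternatives fire, here is a minimal check with made-up rows, the second of which ends in a pipe plus spaces:
printf '%s\n' 'A|B|C|' 'D|E|F|   ' 'eno|ename|address' | awk '{gsub(/\|$|\| +$/,"")} 1'
# A|B|C
# D|E|F
# eno|ename|address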
perl -lpe 's/\|\s*$//' file
will do it. That removes only pipes followed by optional whitespace at the end of each line. Note the $ end-of-line anchor.
I added -l since the line's newline can get removed by the s/// command (\s also matches the newline), and -l will put it back.
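A quick check with made-up input ending in a pipe plus spaces:
printf 'A|B|C|   \n' | perl -lpe 's/\|\s*$//'
# A|B|C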
All you need is this:
sed 's/|$//'
A bit more generic. Let's assume you have the same problem, but with different field separators in different files. Some of these field separators are regular expressions (e.g. a sequence of blanks), others are just a single character c. With a tiny little awk program you can get far:
# remove_last_empty_field.awk
# 1. Get the correct `fs`
BEGIN { fs=FS; if(length(FS)==1) fs=(FS==" ") ? "[[:blank:]]+" : "["FS"]" }
# remove the empty field
{ sub(fs"$","") }
# Print the current record
1
Now you can run this on your various files. Note that FS is assigned with -v, so it is already set when the BEGIN block computes fs; a plain FS="|" placed after the script would only take effect after BEGIN has run:
$ awk -f remove_last_empty_field.awk f1.txt
$ awk -v FS="|" -f remove_last_empty_field.awk f2.txt
$ awk -v FS="[|.*]" -f remove_last_empty_field.awk f3.txt
perl -pi -e 's/\|$//' Your_File
I have a huge file that contains lines that follow this format:
New-England-Center-For-Children-L0000392290
Southboro-Housing-Authority-L0000392464
Crew-Star-Inc-L0000391998
Saxony-Ii-Barber-Shop-L0000392491
Test-L0000392334
What I'm trying to do is narrow it down to just this:
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
Can anyone help with this?
Using GNU awk:
awk -F\- 'NF--' OFS=\- file
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
Set the input and output field separators to -.
NF contains the number of fields. Decrementing it by 1 removes the last field, and since NF-- evaluates to the old (non-zero) value, awk performs its default action of printing the modified record.
Using sed:
sed 's/\(.*\)-.*/\1/' file
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
A simple greedy regex matches everything up to the last hyphen.
The replacement keeps the captured group and discards the rest.
Version 1 of the Question
The first version of the input was in the form of HTML and parts had to be removed both before and after the desired text:
$ sed -r 's|.*[A-Z]/([a-zA-Z-]+)-L0.*|\1|' input
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing
Version 2 of the Question
In the revised question, it is only necessary to remove the text that starts with -L00:
$ sed 's|-L00.*||' input2
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
Both of these commands use a single "substitute" command. The command has the form s|old|new|.
The perl code for this would be: perl -nle'print $1 if(m{-.*?/(.*?-.*?)-})'
We can break the regex down as follows:
- matches the hyphen between the city and state
.*? matches the smallest set of characters that makes the regex work, i.e. the state
/ matches the slash between the state and the data you want
( starts the capture of the data you are interested in
.*?-.*? will match the data you care about
) closes the capture
- matches the dash before the L####### to give the regex something to match after your data. This prevents the minimal regex from matching 0 characters.
Then the print statement will print out what was captured (your data).
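For instance, running it against one of the original <loc> lines (shown further below) gives:
echo '<loc>http://www.example.com/bp/Lowell-MA/Special-Restaurant-L0000423916.htm</loc>' | perl -nle'print $1 if(m{-.*?/(.*?-.*?)-})'
# Special-Restaurant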
awk likes these things:
$ awk -F'[/-]' -v OFS="-" '{print $(NF-3), $(NF-2)}' file
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing
This sets / and - as possible field separators. Based on them, it prints field NF-3 and field NF-2 joined by the delimiter -. Note that $NF stands for the last field, hence $(NF-1) is the penultimate one, etc.
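Applied to one of the original <loc> lines, for example:
echo '<loc>http://www.example.com/bp/Houston-TX/Eliot-Cleaning-L0000422797.htm</loc>' | awk -F'[/-]' -v OFS="-" '{print $(NF-3), $(NF-2)}'
# Eliot-Cleaning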
This sed is also helpful:
$ sed -r 's#.*/(\w*-\w*)-\w*\.\w*</loc>$#\1#' file
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing
It selects the word-word block that appears after a slash / and is followed by word.word</loc> at the end of the line, then prints that block back.
Update
Based on your new input, this will do it:
$ sed -r 's/(.*)-L\w*$/\1/' file
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
It selects everything up to the block -L + something + end of line, and prints it back.
You can use also another trick:
rev file | cut -d- -f2- | rev
Since what you want is every one of the - separated fields except the last, reverse the line, take everything from the 2nd field on, and then reverse back.
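A quick check on one of the sample lines:
echo 'New-England-Center-For-Children-L0000392290' | rev | cut -d- -f2- | rev
# New-England-Center-For-Children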
Here's how I'd do it with Perl:
perl -nle 'm{example[.]com/bp/(.*?)/(.*?)-L\d+[.]htm} && print $2' filename
Note: the original question was matching input lines like this:
<loc>http://www.example.com/bp/Lowell-MA/Special-Restaurant-L0000423916.htm</loc>
<loc>http://www.example.com/bp/Houston-TX/Eliot-Cleaning-L0000422797.htm</loc>
<loc>http://www.example.com/bp/New-Orleans-LA/Kennedy-Plumbing-L0000423121.htm</loc>
The -n option tells Perl to loop over every line of the file (but not print them out).
The -l option adds a newline onto the end of every print
The -e 'perl-code' option executes perl-code for each line of input
The pattern:
/regex/ && print
Will only print if the regex matches. If the regex contains capture parentheses you can refer to the first captured section as $1, the second as $2 etc.
If your regex contains slashes, it may be cleaner to use a different regex delimiter ('m' stands for 'match'):
m{regex} && print
If you have a modern Perl, you can use -E to enable modern features and use say instead of print to print with a newline appended:
perl -nE 'm{example[.]com/bp/(.*?)/(.*?)-L\d+[.]htm} && say $2' filename
This is very concise in Perl:
perl -i.bak -lpe's/-[^-]+$//' myfile
Note that this will modify the input file in-place but will keep a backup of the original data in a file called myfile.bak
I am very new to sed so please bear with me... I have a file with contents like
a=1
b=2,3,4
c=3
d=8
.
.
I want to append 'x' to a line which starts with 'c=' and does not contain an 'x'. What I am using right now is
sed -i '/^c=/ s/$/x/' file
but this does not cover the second part of my explanation: the 'x' should only be appended if the line does not already contain one. As it stands, running the command twice turns the line into "c=3xx", which I do not want.
Any help here would be highly appreciated; I know there are a lot of sharp heads around here :) I understand that this could be handled pretty easily in bash, but using sed here is a hard requirement.
You can do something like this:
sed -i '/^c=/ {/x/b; s/$/x/}' file
Curly brackets are used for grouping. The b command branches to the end of the script (stops the processing of the current line).
b label
Branch to label; if label is omitted, branch to end of script.
Edit: as William Pursell suggests in the comment, a shorter version would be
sed -i '/^c=/ { /x/ !s/$/x/ }' file
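Either version is idempotent, so running it twice still appends only a single x (a quick check on a made-up line, using the non-in-place form):
printf 'c=3\n' | sed '/^c=/{/x/b; s/$/x/}' | sed '/^c=/{/x/b; s/$/x/}'
# c=3x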
awk is probably a better choice here as you can easily combine regular expression matches with logical operators. Given the input:
$ cat file
a=1
b=2,3,4
c=3
c=x
c=3
d=8
The command would be:
$ awk '/^c=/ && !/x/ {$0=$0"x"} 1' file
a=1
b=2,3,4
c=3x
c=x
c=3x
d=8
Where $0 is the awk variable that contains the current line being read, and the trailing 1 is an always-true condition that triggers awk's default action of printing every line.
This might work for you (GNU sed):
sed -i '/^c=[^x]*$/s/$/x/' file
or:
sed -i 's/^c=[^x]*$/&x/' file
I have a text file which looks something like this:
jdkjf
kjsdh
jksfs
lksfj
gkfdj
gdfjg
lkjsd
hsfda
gadfl
dfgad
[very many lines, that is]
but would rather like it to look like
jdkjf kjsdh
jksfs lksfj
gkfdj gdfjg
lkjsd hsfda
gadfl dfgad
[and so on]
so I can print the text file on a smaller number of pages.
Of course, this is not a difficult problem, but I'm wondering if there is some excellent tool out there for solving problems like these.
EDIT: I'm not looking for a way to remove every other newline from a text file, but rather a tool which interprets text as "pictures" and then lays these out on the page nicely (by writing the appropriate whitespace symbols).
You can use this Python code:
tables = input("Enter number of tables ")
matrix = []
file = open("test.txt")
for line in file:
    matrix.append(line.replace("\n", ""))
    if len(matrix) == int(tables):
        print(" ".join(matrix))
        matrix = []
if matrix:  # flush any leftover lines when the count doesn't divide evenly
    print(" ".join(matrix))
file.close()
(Since you don't name your operating system, I'll simply assume Linux, Mac OS X or some other Unix...)
Your example looks like it can also be described by the expression "joining 2 lines together".
This can be achieved in a shell (with the help of xargs and awk) -- but only for an input file that is structured like your example (the result always puts 2 words on a line, irrespective of how many words each input line contains):
cat file.txt | xargs -n 2 | awk '{ print $1" "$2 }'
This can also be achieved with awk alone (this time it really joins 2 full lines, irrespective of how many words each one contains):
awk '{printf "%s ", $0; getline; print $0}' file.txt
Or use sed:
sed 'N;s#\n# #' < file.txt
Also, xargs could do it:
xargs -L 2 < file.txt
I'm sure other people could come up with dozens of other, quite different methods and commandline combinations...
Caveats: You'll have to test files with an odd number of lines explicitly; the last input line may not be processed correctly in that case.
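For instance, with GNU sed the unpaired last line survives on its own, while many other sed implementations would drop it (a quick check with a made-up 3-line input):
printf '%s\n' a b c | sed 'N;s#\n# #'
# a b
# c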