Using grep and sed to extract file name and number - sed

I have a list of files in the current directory; some of them contain the keyword "speed", and on the same line as the keyword there is a number.
For example, in the file "filename.txt", I have the following lines:
some text
speed: this is the keyword, and equals 150
some text
I want to use a combination of grep and sed to get the following output:
filename: 150
Currently, I can only extract the file names and the lines that contain the keyword using grep, but I don't know how to produce the output shown above using a combination of grep and sed. The grep command I have so far is:
grep -r "speed"
which gives me:
filename.txt:speed: this is the keyword, and equals 150
Any help would be appreciated!

As Wiktor Stribiżew mentioned in the comments, the command below will give you the desired output:
awk '/speed/{print FILENAME": "$NF}' filename.txt
Explanation
/speed/ is used since that is the keyword used as a reference for extracting.
{print FILENAME": "$NF}
print FILENAME prints the name of the file being processed
NF holds the number of fields on the line, so $NF refers to the last field; for this text file that is 150
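If you also want the extension stripped (the desired output shows filename: rather than filename.txt:), a minimal variation, assuming the files of interest end in .txt, could be:
awk '/speed/{ f=FILENAME; sub(/\.txt$/, "", f); print f": "$NF }' *.txt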

Assuming the filenames do not contain a colon (:) character, would you please try the following:
grep -r "speed" | sed -E 's/^([^:]+):[^0-9]*([0-9]+)/\1: \2/'
In the sed command:
^([^:]+) matches the filename and the 1st capture group is assigned to it.
[^0-9]* matches non-digits to be skipped.
([0-9]+) matches digits and the 2nd capture group is assigned to it.
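With the sample file above, the pipeline produces something like the following (depending on how grep -r is invoked, the name may be prefixed with ./):
$ grep -r "speed" | sed -E 's/^([^:]+):[^0-9]*([0-9]+)/\1: \2/'
filename.txt: 150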

Related

remove last delimiter in sed/awk/perl

An input file is given, each line of which contains delimited data with an extra delimiter at the end, in both the header and the data, with or without enclosures.
The extra delimiter at the end may or may not be followed by spaces.
Scenario 1 : Header & Data contain extra delimiter at the end
eno|ename|address|
A|B|C|
D|E|F|
Scenario 2 : Header doesn't contain extra delimiter at the end
eno|ename|address
A|B|C|
D|E|F|
Scenario 3 : With enclosures
eno|ename|address|
1|2|"A"|
Final output has to be like
Scenario 1 :
eno|ename|address
A|B|C
D|E|F
Scenario 2 :
eno|ename|address
A|B|C
D|E|F
Scenario 3 :
eno|ename|address
1|2|"A"
The solution I have tried so far is below, but it won't work for all three scenarios. Is there any way to make a single sed/awk/perl command that supports all three scenarios?
perl -pne 's/(.*)\|/$1/' filename
Could you please try the following:
awk '{gsub(/\|$|\| +$/,"")} 1' Input_file
Explanation:
gsub is the awk function that globally substitutes the matched pattern with the given value.
Explanation of regex:
/\|$|\| +$/: The regex has two alternatives separated by |. The first, \|$, removes a | at the very end of the line; the second, \| +$, removes a | followed by trailing spaces. Together they take care of both conditions.
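As a quick check, assuming Input_file holds the Scenario 1 lines, the command yields:
$ awk '{gsub(/\|$|\| +$/,"")} 1' Input_file
eno|ename|address
A|B|C
D|E|F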
perl -lpe 's/\|\s*$//' file
will do it. That only removes pipes followed by optional whitespace at the end of each line. Note the $ line anchor.
I added the -l since each line's newline gets removed by the s/// command, and -l will put it back.
All you need is this:
sed 's/|$//'
A bit more generic. Let's assume you have the same problem, but with different field separators in different files. Some of these field separators are regular expressions (e.g. a sequence of blanks), others are just a single character c. With a tiny little awk program you can get far:
# remove_last_empty_field.awk
# 1. Get the correct `fs`
BEGIN { fs=FS; if(length(FS)==1) fs=(FS==" ") ? "[[:blank:]]+" : "["FS"]" }
# remove the empty field
{ sub(fs"$","") }
# Print the current record
1
Now you can run this on your various files as:
$ awk -f remove_last_empty_field.awk f1.txt
$ awk -f remove_last_empty_field.awk FS="|" f2.txt
$ awk -f remove_last_empty_field.awk FS="[|.*]" f3.txt
perl -pi -e 's/\|$//' Your_FIle

Extract filename from multiple lines in unix

I'm trying to extract the name of the file that has been generated by a Java program. This Java program spits out multiple lines, and I know exactly what the format of the file name is going to be. The text that the Java program spits out is as follows:
ABCASJASLEKJASDFALDSF
Generated file YANNANI-0008876_17.xml.
TDSFALSFJLSDJF;
I'm capturing the output in a variable and then applying a sed operator in the following format:
sed -n 's/.*\(YANNANI.\([[:digit:]]\).\([xml]\)*\)/\1/p'
The result set is:
YANNANI-0008876_17.xml.
However, my problem is that I want the extraction of the filename to stop at .xml. The last dot should never be extracted.
Is there a way to do this using sed?
Let's look at what your capture group actually captures:
$ grep 'YANNANI.\([[:digit:]]\).\([xml]\)*' infile
Generated file YANNANI-0008876_17.xml.
That's probably not what you intended:
\([[:digit:]]\) captures just a single digit (and the capture group around it doesn't do anything)
\([xml]\)* is "any of x, m or l, 0 or more times", so it matches the empty string (as above – or the line wouldn't match at all!), x, xx, lll, mxxxxxmmmmlxlxmxlmxlm, xml, ...
There is no way the final period is removed because you don't match anything after the capture groups
What would make sense instead:
Match "digits or underscores, 0 or more": [[:digit:]_]*
Match .xml, literally (escape the period): \.xml
Make sure the rest of the line (just the period, in this case) is matched by adding .* after the capture group
So the regex for the string you'd like to extract becomes
$ grep 'YANNANI.[[:digit:]_]*\.xml' infile
Generated file YANNANI-0008876_17.xml.
and to remove everything else on the line using sed, we surround regex with .*\( ... \).*:
$ sed -n 's/.*\(YANNANI.[[:digit:]_]*\.xml\).*/\1/p' infile
YANNANI-0008876_17.xml
This assumes you really meant . after YANNANI (any character).
You can call sed twice: first in printing mode and then in replacement mode:
sed -n 's/.*\(YANNANI.\([[:digit:]]\).\([xml]\)*\)/\1/p' | sed 's/\.$//g'
The second sed will remove the trailing . at the end of every line fetched by your first sed.
Or you can go for an awk solution if you prefer:
awk '/.*YANNANI.[0-9]+.[0-9]+.xml/{print substr($NF,1,length($NF)-1)}'
This will print the last field of every line that matches your regex, truncating its last character using substr.
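Putting it together, a sketch of capturing the Java program's output in a shell variable and extracting the name (the generator command here is hypothetical):
output=$(java -jar generator.jar)   # hypothetical program printing the lines above
file=$(printf '%s\n' "$output" | sed -n 's/.*\(YANNANI.[[:digit:]_]*\.xml\).*/\1/p')
echo "$file"   # YANNANI-0008876_17.xml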

Printing all words that start with "#" using sed in BASH

I have a file with a lot of text, but I want to print only the words that start with "#". Ex:
My name is #Laura and I live in #London. Name=#Laura. City=#London
How can I print all words that start with #? I did the following and it worked, but I want to do it using sed. I tried several patterns, but I cannot make it print anything.
grep -o -E "#\w+" file.txt
Thanks
Use this sed command:
sed 's/[^#]*\(#[^ .]*\)/\1\n/g' file.txt
Explanation: we invoke the substitution command of sed, which has the following structure: sed 's/regex/replace/options'. We search for a regex and replace it, using the g option so the substitution is applied to every match on the line.
We look for a series of non-# characters [^#]* followed by a # and a run of characters that are neither spaces nor dots: #[^ .]*. We put this last part in a group \(\) and substitute it back in the replacement with \1.
Note that we add a newline after each match; you can keep the output on a single line by omitting the \n.
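For reference, with the sample line above and GNU sed (which understands \n in the replacement), the output comes out roughly as:
$ sed 's/[^#]*\(#[^ .]*\)/\1\n/g' file.txt
#Laura
#London
#Laura
#London
followed by a blank line, since the replacement adds its own newline before the line's original one.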

Using command line to remove text?

I have a huge file that contains lines that follow this format:
New-England-Center-For-Children-L0000392290
Southboro-Housing-Authority-L0000392464
Crew-Star-Inc-L0000391998
Saxony-Ii-Barber-Shop-L0000392491
Test-L0000392334
What I'm trying to do is narrow it down to just this:
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Test
Can anyone help with this?
Using GNU awk:
awk -F\- 'NF--' OFS=\- file
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
Set the input and output field separator to -.
NF contains the number of fields. Reduce it by 1 to remove the last field.
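If the terse form is hard to read, the same idea spelled out would be a sketch like the following (still GNU awk, which rebuilds the record with OFS when NF is assigned):
awk 'BEGIN{FS=OFS="-"} {NF--; print}' file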
Using sed:
sed 's/\(.*\)-.*/\1/' file
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
Simple greedy regex to match up to the last hyphen.
In replacement use the captured group and discard the rest.
Version 1 of the Question
The first version of the input was in the form of HTML and parts had to be removed both before and after the desired text:
$ sed -r 's|.*[A-Z]/([a-zA-Z-]+)-L0.*|\1|' input
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing
Version 2 of the Question
In the revised question, it is only necessary to remove the text that starts with -L00:
$ sed 's|-L00.*||' input2
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
Both of these commands use a single "substitute" command. The command has the form s|old|new|.
The perl code for this would be: perl -nle'print $1 if(m{-.*?/(.*?-.*?)-})'
We can break the Regex down to matching the following:
- matches the dash that's between the city and the state
.*? matches the smallest set of character(s) that makes the regex work, i.e. the state
/ matches the slash between the State and the data you want
( starts the capture of the data you are interested in
.*?-.*? will match the data you care about
) will close out the capture
- will match the dash before the L####### to give the regex something to match after your data. This will prevent the minimal Regex from matching 0 characters.
Then the print statement will print out what was captured (your data).
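Run against the Version 1 <loc> lines quoted later in this thread (assuming they are saved in a file named input), it prints:
$ perl -nle'print $1 if(m{-.*?/(.*?-.*?)-})' input
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing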
awk likes these things:
$ awk -F[/-] -v OFS="-" '{print $(NF-3), $(NF-2)}' file
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing
This sets / and - as possible field separators. Based on them, it prints the last_field-3 and last_field-2 separated by the delimiter -. Note that $NF stands for the last field, hence $(NF-1) is the penultimate one, etc.
This sed is also helpful:
$ sed -r 's#.*/(\w*-\w*)-\w*\.\w*</loc>$#\1#' file
Special-Restaurant
Eliot-Cleaning
Kennedy-Plumbing
It selects the word-word block that appears after a slash / and is followed by word.word</loc> at the end of the line, then prints that block back.
Update
Based on your new input, this can make it:
$ sed -r 's/(.*)-L\w*$/\1/' file
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test
It selects everything up to the block -L + something + end of line, and prints it back.
You can use also another trick:
rev file | cut -d- -f2- | rev
Since what you want is every field separated by - except the last one, let's get all of them but the last. How? By reversing the line, taking everything from the 2nd field onward, and then reversing back.
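For example, with the sample lines in a file named file:
$ rev file | cut -d- -f2- | rev
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test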
Here's how I'd do it with Perl:
perl -nle 'm{example[.]com/bp/(.*?)/(.*?)-L\d+[.]htm} && print $2' filename
Note: the original question was matching input lines like this:
<loc>http://www.example.com/bp/Lowell-MA/Special-Restaurant-L0000423916.htm</loc>
<loc>http://www.example.com/bp/Houston-TX/Eliot-Cleaning-L0000422797.htm</loc>
<loc>http://www.example.com/bp/New-Orleans-LA/Kennedy-Plumbing-L0000423121.htm</loc>
The -n option tells Perl to loop over every line of the file (but not print them out).
The -l option adds a newline onto the end of every print
The -e 'perl-code' option executes perl-code for each line of input
The pattern:
/regex/ && print
will only print if the regex matches. If the regex contains capture parentheses, you can refer to the first captured section as $1, the second as $2, etc.
If your regex contains slashes, it may be cleaner to use a different regex delimiter ('m' stands for 'match'):
m{regex} && print
If you have a modern Perl, you can use -E to enable modern features and use say instead of print to print with a newline appended:
perl -nE 'm{example[.]com/bp/(.*?)/(.*?)-L\d+[.]htm} && say $2' filename
This is very concise in Perl
perl -i.bak -lpe's/-[^-]+$//' myfile
Note that this will modify the input file in place but will keep a backup of the original data in a file called myfile.bak
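For instance, if myfile contains the five sample lines from the question, after running the one-liner the trimmed names are in myfile and the originals in myfile.bak:
$ perl -i.bak -lpe's/-[^-]+$//' myfile
$ cat myfile
New-England-Center-For-Children
Southboro-Housing-Authority
Crew-Star-Inc
Saxony-Ii-Barber-Shop
Test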

Read word from a file and return next word

Using a shell script, I want to read a word from a text file and return the word in the next column.
For example, my input file will look like this:
AGE1 PERSON1
AGE2 PERSON2
AGE3 PERSON3
AGE4 PERSON4
I have a variable in my sh script holding a PERSON's name.
I want to read the input text file and get the value of that person's age.
Please help, I'm a beginner in shell scripting.
A slightly simpler solution is:
age=$( awk '$2==name { print $1 }' name="$name" input-file )
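Equivalently (just a stylistic alternative), the shell variable can be handed to awk with the -v option, which assigns it before the program starts; note that -v interprets backslash escapes in the value:
age=$( awk -v name="$name" '$2==name { print $1 }' input-file )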
Building upon shellter's comment:
age=$(grep "$person_name" people_file.txt | cut -f1 -d' ')
I'll try to explain everything. First, I assume some things (but you can change them in your script):
Your file with the data you entered is called people_file.txt.
The person's name you want to find is in the variable $person_name.
The variable you want to store the result is $age.
Firstly, because we need to run commands to generate the value of the $age variable, we use $( and ) to run a command (or a series of commands); the whole expression is replaced with the text captured from executing the command (or commands).
We first need to find the line which contains the person's name. For that we use grep: grep regex file. Grep will search file line by line, selecting every line that matches the regular expression regex. In our case we can simply search for the person's name directly (assuming it doesn't contain special characters, like a period or an asterisk). Note that we must place the variable between double quotes, otherwise a person's name that has a space in it might be split on the command line, so that the first name is used as the regular expression and the surname as the file. If you want to search in a case-insensitive manner (for example, John will also find a line with JOHN or john), you can use the -i flag: grep -i regex file. The selected lines will be printed by grep to its output, and we pump those lines into the input of the next command with the pipe operator |.
Finally, we have a line (or many lines) with the results. Now we must extract the age. The cut command will split each line it reads from its input into fields and only print the fields you ask it to. In this case, we ask for the first field with the -f1 option. Also, we specify that the space character is to be used as the delimiter (i.e. the character that separates the fields) with the -d' ' option.
If you have more than one line with the same person's name, we need to pipe the output of grep into a head command, so that we can have only the number of lines we want. We can tell head how many lines we want with the -n N option. So if you only want the first match:
age=$(grep "$person_name" people_file.txt | head -n 1 | cut -f1 -d' ')
Hope this helps a little =)
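A quick run against the sample data, assuming it is saved as people_file.txt:
person_name="PERSON3"
age=$(grep "$person_name" people_file.txt | cut -f1 -d' ')
echo "$age"   # prints AGE3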
age=`
perl -nle'
BEGIN { $n = shift(@ARGV); }
print $1 if /^(\S+)\s+\Q$n\E$/;
' "$name" file
`
Tested with bash in sh mode.
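For example, assuming the data above is in a file named file and $name holds PERSON2, this sets age to AGE2:
name="PERSON2"
age=`perl -nle'BEGIN { $n = shift(@ARGV); } print $1 if /^(\S+)\s+\Q$n\E$/;' "$name" file`
echo "$age"   # AGE2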