sed or awk to change a specific number in a file on RHEL7 - sed

I need help figuring out the syntax or what command to use to find and replace a specific number in a file.
I need to replace the number 10 with 25 in a configuration file. I have tried the following:
sed 's/10/25/g' /etc/security/limits.conf
This changes other instances that contain 10, such as 1000 and 10000, to 2500 and 25000. I need to just change 10 to 25. Please help.
Thank you,
Joseph

The trick here is to limit the sed substitution to the line you want to change. For limits.conf you are best off matching the domain, type and item. So if you wanted to just change a limit for domain #student, type hard, item nproc, you'd use something like
sed '/#student.*hard.*nproc/s/10/25/g' /etc/security/limits.conf
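For example, you can sanity-check the address on a couple of made-up sample lines before touching the real file:
printf '%s\n' '#student hard nproc 10' '* soft nofile 10000' | sed '/#student.*hard.*nproc/s/10/25/g'
#student hard nproc 25
* soft nofile 10000
Only the addressed line is changed; the 10000 on the other line is left alone.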

sed -ri '/^#/!s/(^.*)([[:space:]]10$)/\1 25/' /etc/security/limits.conf
With extended regular expressions enabled (-r or -E), the /^#/! address processes only lines that don't start with a #. The pattern then captures the line in two sections, and the replacement writes back the first section followed by a space and 25. The $ ensures that the 10 being replaced is anchored at the end of the line.
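A quick way to check the pattern on made-up sample lines before editing the real file:
printf '%s\n' '* soft nproc 10' '* soft nofile 10000' '#* hard nproc 10' | sed -r '/^#/!s/(^.*)([[:space:]]10$)/\1 25/'
* soft nproc 25
* soft nofile 10000
#* hard nproc 10
Only the non-comment line whose last field is exactly 10 is rewritten.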
Awk is another option:
awk -i inplace 'NF==4 && $4==10 { gsub("10","25",$4) }1' /etc/security/limits.conf
Check whether the line has 4 whitespace-delimited fields (NF==4) and the 4th field ($4) is 10. If this condition is met, replace 10 with 25 in that field using gsub; the trailing 1 prints every line.
The -i inplace option enables in-place editing in more recent versions of GNU awk (4.1 and later). If a compliant version is not available, use:
awk 'NF==4 && $4==10 { gsub("10","25",$4) }1' /etc/security/limits.conf > /etc/security/limits.tmp && mv -f /etc/security/limits.tmp /etc/security/limits.conf
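Again on made-up sample lines, so you can see the effect without touching limits.conf:
printf '%s\n' '* soft nproc 10' '* soft nofile 10000' | awk 'NF==4 && $4==10 { gsub("10","25",$4) }1'
* soft nproc 25
* soft nofile 10000
One caveat: changing a field makes awk rebuild the record using the output field separator, so runs of spaces or tabs between columns are collapsed to single spaces on the lines it modifies.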

Use this Perl one-liner, where \b stands for word break (so that 10 will not match 210 or 102):
perl -pe 's/\b10\b/25/g' in_file > out_file
Or to change the file in-place:
perl -i.bak -pe 's/\b10\b/25/g' in_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-i.bak : Edit input files in-place (overwrite the input file). Before overwriting, save a backup copy of the original file by appending to its name the extension .bak.
The regex uses modifier /g : Match the pattern repeatedly.
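A quick sanity check of the word-boundary behaviour on made-up numbers:
echo '10 100 210 102' | perl -pe 's/\b10\b/25/g'
25 100 210 102
Only the standalone 10 is replaced.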
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlrequick: Perl regular expressions quick start

Related

Deleting lines between two characters using sed

I have multiple datasets in txt format which have a predictable content. I am trying to remove the first set of lines. The first line starts with >*chromosome and I want to delete everything until >*plasmid. I can either tell it to delete everything from > until it encounters it again or delete everything between the first > and the second >. I have been trying something like this:
sed -i.bak '/>/,/^\>*$/{d}' file.txt
This did not work. The original code I found was:
sed -i.bak '/>/,/^\s*$/{d}' file.txt
Use this Perl one-liner:
perl -0777 -pe 's{^>chromosome.*(?=^>plasmid)}{}sm' in.fasta
EXAMPLE:
# Create example input file:
cat > in.fasta <<EOF
>foo
TCGA
>chromosome
ACGT
>plasmid
CGTA
EOF
perl -0777 -pe 's{^>chromosome.*(?=^>plasmid)}{}sm' in.fasta > out.fasta
Output in out.fasta:
>foo
TCGA
>plasmid
CGTA
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-0777 : Slurp files whole.
The regex uses these modifiers:
/m : Allow multiline matches.
/s : Allow . to match a newline.
^>chromosome.*(?=^>plasmid) : Regex that matches >chromosome at the beginning of a line, followed by 0 or more characters, ending right at (but not including) the match to >plasmid at the beginning of a line. The expression (?=PATTERN) is a zero-width positive lookahead.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlre: Perl regular expressions (regexes): Quantifiers; Character Classes and other Special Escapes; Assertions; Capture groups
perldoc perlrequick: Perl regular expressions quick start
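For completeness, a plain sed range expression gives the same result on the sample file above, assuming the two markers each appear exactly once and in this order:
sed '/^>chromosome/,/^>plasmid/{ /^>plasmid/!d; }' in.fasta > out.fasta
This deletes every line from the >chromosome header up to, but not including, the >plasmid header.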

How to increment a number at the end of a line with bash?

I have a text file that looks like this:
qwerty=1.8
asdfg=15.9
zxcvb=144.99
I managed to replace a specific version with another specific version using sed:
sed s/asdfg=15.9/asdfg=15.10/ file
But how can I make it dynamic? My end goal is a command that I can use with the argument "asdfg" and it will update the line asdfg=15.9 to asdfg=15.10 without me having to know the version.
With GNU awk:
$ # adds 1 to entire number after =
$ awk 'match($0, /(asdfg=)(.+)/, m){$0 = m[1] m[2]+1} 1' file
qwerty=1.8
asdfg=16.9
zxcvb=144.99
$ # adds 1 after the decimal point
$ awk 'match($0, /(asdfg=[0-9]+\.)(.+)/, m){$0 = m[1] m[2]+1} 1' file
qwerty=1.8
asdfg=15.10
zxcvb=144.99
Here match is used to separate out the prefix string and the number to be incremented. The results are available in the m array.
With perl
$ perl -pe 's/asdfg=\K.+/$&+1/e' file
qwerty=1.8
asdfg=16.9
zxcvb=144.99
The e flag allows you to use Perl code in the replacement section. \K is used here to avoid asdfg= showing up in the matched portion. $& will have the matched portion, which is the number after asdfg= in this case.
To change only after the decimal point:
$ perl -pe 's/asdfg=\d*\.\K.+/$&+1/e' ip.txt
qwerty=1.8
asdfg=15.10
zxcvb=144.99
Use perl -i -pe to write the changes back to the file. Use -i.bkp to keep a backup of the original.
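To address the "make it dynamic" part of the question, here is one possible sketch (bump is a hypothetical function name, not part of the answers above); it passes the key in through the environment so it can be interpolated into the regex without shell-quoting trouble:
bump() { key="$1" perl -i -pe 's/\Q$ENV{key}\E=\d*\.\K\d+/$&+1/e' "$2"; }
# usage: bump asdfg file    changes asdfg=15.9 to asdfg=15.10 and leaves other lines alone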

Require exactly 1 trailing newline in a file

I have a mix of files with various ways of using trailing new lines. There are no carriage returns, it's only \n. Some files have multiple newlines and some files have no trailing newline. I want to edit the files in place.
How can I edit the files to have exactly 1 trailing newline?
To change text files in-place to have one and only one trailing newline:
sed -zi 's/\n*$/\n/' file
This requires GNU sed.
-z tells sed to read in the file using the NUL character as a separator. Since text files have no NUL characters, this has the effect of reading the whole file in at once.
-i tells GNU sed to change the file in place.
s/\n*$/\n/ tells sed to replace however many newlines there are at the end of the file with a single newline.
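A quick check on a made-up stream (od -c just makes the newlines visible):
printf 'a\nb\n\n\n' | sed -z 's/\n*$/\n/' | od -c
0000000   a  \n   b  \n
0000004
The three trailing newlines collapse to one; a file with no trailing newline would gain one.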
Replace all trailing newlines with one?
$text =~ s/\n+$/\n/;
This leaves the file with one newline at the end – if it had at least one to start with. If you want it to be there even if the file didn't have one, replace \n+ with \n*.
For in-place editing of the file, as a one-liner:
perl -i -0777 -wpe 's/\n+$/\n/' file.txt
The meaning of the switches is explained in Command Switches in perlrun.
Here is a summary of the switches. Please see the above docs for precise explanations.
-i changes the file "in place." Note that data is still copied and temporary files are used
-0777 reads the file whole. The -0[oct|hex] form sets $/ to the given character code, so a bare -0 sets it to NUL
-w enables warnings. Not exactly the same as use warnings, but better than nothing
-p the code in '' runs on each line of file in turn, like -n, and then $_ is printed
-e what follows between '' is executed as Perl code
-E is the same but also enables features, like say
Note that we can see the equivalent code by using core O and B::Deparse modules as
perl -MO=Deparse -wp -e 1
This prints
BEGIN { $^W = 1; }
LINE: while (defined($_ = <ARGV>)) {
    '???';
}
continue {
    print $_;
}
-e syntax OK
showing a script equivalent to the one liner with -w and -p.
perl -i -0 -pe 's/\n\n*$/\n/' input-file
The solutions posted so far read your whole input file into memory which will be an issue if your file is huge. This only reads contiguous empty lines into memory:
awk -i inplace '/./{printf "%s", buf; buf=""; print; next} {buf = buf $0 ORS}' file
The above uses GNU awk for inplace editing.
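To see what it does without editing anything, drop the -i inplace and feed it a made-up stream:
printf 'a\n\nb\n\n\n' | awk '/./{printf "%s", buf; buf=""; print; next} {buf = buf $0 ORS}' | od -c
0000000   a  \n  \n   b  \n
0000005
Blank lines in the middle are kept, trailing blank lines are dropped, and the last line ends with exactly one newline.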

Add text at the end of each line

I'm on Linux command line and I have file with
127.0.0.1
128.0.0.0
121.121.33.111
I want
127.0.0.1:80
128.0.0.0:80
121.121.33.111:80
I remember my colleagues were using sed for that, but after reading the sed manual it is still not clear how to do it on the command line.
You could try using something like:
sed 's/$/:80/' ips.txt > new-ips.txt
Provided that your file format is just as you have described in your question.
The s/// substitution command matches (finds) the end of each line in your file (using the $ character) and then appends (replaces) the :80 to the end of each line. The ips.txt file is your input file... and new-ips.txt is your newly-created file (the final result of your changes.)
Also, if you have a list of IP numbers that happen to have port numbers attached already (as noted by Vlad and as given by aragaer), you could try using something like:
sed '/:[0-9]*$/ ! s/$/:80/' ips.txt > new-ips.txt
So, for example, if your input file looked something like this (note the :80):
127.0.0.1
128.0.0.0:80
121.121.33.111
The final result would look something like this:
127.0.0.1:80
128.0.0.0:80
121.121.33.111:80
Concise version of the sed command:
sed -i 's/$/:80/' file.txt
Explanation:
sed stream editor
-i in-place (edit file in place)
s substitution command
/replacement_from_reg_exp/replacement_to_text/ statement
$ matches the end of line (replacement_from_reg_exp)
:80 text you want to add at the end of every line (replacement_to_text)
file.txt the file name
How can this be achieved without modifying the original file?
If you want to leave the original file unchanged and have the results in another file, then give up -i option and add the redirection (>) to another file:
sed 's/$/:80/' file.txt > another_file.txt
sed 's/.*/&:80/' abcd.txt > abcde.txt
If you'd like to add text at the end of each line in-place (in the same file), you can use -i parameter, for example:
sed -i'.bak' 's/$/:80/' foo.txt
However, the -i option is a non-standard Unix extension and may not be available on all operating systems.
So you can consider using ex (which is equivalent to vi -e/vim -e):
ex +"%s/$/:80/g" -cwq foo.txt
which will add :80 to each line, including blank lines.
So a better method is to check whether the line actually contains any number, and only then append the :80, for example:
ex +"g/[0-9]/s/$/:80/g" -cwq foo.txt
If the file has a more complex format, consider using a proper regex instead of [0-9].
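As a sketch of such a regex (assuming plain IPv4 addresses, one per line, and using BRE-style intervals so it should work with both vim's ex and a traditional ex):
ex +"g/^[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}$/s/$/:80/" -cwq foo.txt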
You can also achieve this using the backreference technique
sed -i.bak 's/\(.*\)/\1:80/' foo.txt
You can also do this with awk:
awk '{print $0":80"}' foo.txt > tmp && mv tmp foo.txt
Using a text editor, check for ^M (Control-M, or carriage return) at the end of each line. You will need to remove them first, then append the additional text at the end of the line. In the command below, the ^M is a literal carriage return, typed as Ctrl-V then Ctrl-M (with GNU sed you can use \r instead):
sed -i 's|^M||g' ips.txt
sed -i 's|$|:80|g' ips.txt
sed -i 's/$/,/g' foo.txt
I do this quite often to add a comma to the end of an output so I can easily copy and paste it into a Python (or your favorite language) array.

sed + remove "#" and empty lines with one sed command

How to remove comment lines (such as # bla bla) and empty lines (lines without characters) from a file with one sed command?
THX
lidia
If you're worried about starting two sed processes in a pipeline for performance reasons, you probably shouldn't be; it's still very efficient. But based on your comment that you want to do in-place editing, you can still do that with distinct commands (sed commands rather than invocations of sed itself).
You can either use multiple -e arguments or separate commands with a semicolon, something like (just one of these, not both):
sed -i -e 's/#.*$//' -e '/^$/d' fileName
sed -i 's/#.*$//;/^$/d' fileName
The following transcript shows this in action:
pax> printf 'Line # with a comment\n\n# Line with only a comment\n' >file
pax> cat file
Line # with a comment
# Line with only a comment
pax> cp file filex ; sed -i 's/#.*$//;/^$/d' filex ; cat filex
Line
pax> cp file filex ; sed -i -e 's/#.*$//' -e '/^$/d' filex ; cat filex
Line
Note how the file is modified in-place even with two -e options. You can see that both commands are executed on each line. The line with a trailing comment keeps its text but loses the comment, while the line containing only a comment is reduced to an empty line and then deleted.
In addition, the original empty line is also removed.
#paxdiablo has a good answer but it can be improved.
(1) The '/^$/d' clause only matches 100% blank lines.
If you want to also match lines that are entirely whitespace (spaces, tabs etc.) use this instead:
'/^\s*$/d'
(2) The 's/#.*$//' clause turns a comment line into an empty line (which the next clause then deletes) only when the # is in column 0; a comment indented by whitespace leaves a whitespace-only line behind.
If you want to also delete lines that have only whitespace before the first #, use this instead:
'/^\s*#.*$/d'
The above criteria may not be universal (e.g. within a heredoc block or a Python multi-line string, the difference between the approaches could be significant), but in many cases the conventional definition of "blank" lines includes whitespace-only lines, and that of "comment" lines includes whitespace-then-#.
(3) Lastly, on OSX at least, the #paxdiablo solution, in which the first clause turns comment lines into blank lines and the second clause strips blank lines (including what were originally comments), doesn't work. It seems to be more portable to make both clauses /d delete actions as I've done.
The revised command incorporating the above is:
sed -e '/^\s*#.*$/d' -e '/^\s*$/d' inputFile
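A quick check on made-up input shows the whitespace-aware patterns in action; note that because both clauses delete whole lines, a trailing comment on a code line is left untouched:
printf '%s\n' '  # indented comment' 'value = 1  # trailing comment' '   ' '' | sed -e '/^\s*#.*$/d' -e '/^\s*$/d'
value = 1  # trailing comment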
This tiny jewel removes all # comments, no matter where they begin in a line (see caution below):
sed -e 's/\s*#.*$//'
Example:
text="
this is a # test
#this is a test
#this is a #test
this is # another #test
"
$ echo "$text" | sed -e 's/\s*#.*$//'
this is a
this is
Next this removes any resulting blank lines:
$ echo "$text" | sed -e 's/\s*#.*$//' | sed -e '/^\s*$/d'
Caution: Depending on the syntax and/or interpretation of the lines you're processing, this might not be an appropriate solution, as it blindly removes everything from the '#' to the end of the line, even if the '#' is part of your data or code. However, for use cases where a hash is only ever used to introduce an end-of-line comment, it works fine. So, just as with all coding, context must be taken into consideration.
Alternative variant, using grep (note that this drops any line containing a #, not just whole-line comments):
grep -Ev '(#.*$)|(^$)' file.txt
You can use awk:
awk 'NF && !/^[[:blank:]]*#/' file
This prints only lines that are non-blank (NF) and do not start with optional whitespace followed by a #.
The first example (paxdiablo's) is very good, except it doesn't change the file, it just outputs the result. If you want to change it in place:
sudo sed -i 's/#.*$//;/^$/d' inputFile
On (one of) my Linux boxes, sed understands extended regular expressions with the -r option, so:
sed -r '/(^\s*#)|(^\s*$)/d' squid.conf.installed
is very useful for showing all non-blank, non comment lines.
The regex matches lines that start with zero or more spaces or tabs followed by a hash, as well as lines containing only whitespace (or nothing at all); the d command deletes those matching lines from the input.
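For example, on made-up squid.conf-style lines:
printf '%s\n' '# a comment' '   # an indented comment' 'http_port 3128' '' | sed -r '/(^\s*#)|(^\s*$)/d'
http_port 3128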