Insert a line before a match, including leading blank lines, with sed

There is a .conf file that needs to be edited:
#8889 = 192.168.27.128:22

[incomingudp]
# UDP port forwarding example
I want to insert a line before a match, like this:
#8889 = 192.168.27.128:22
3306 = 192.168.159.128:3306

[incomingudp]
# UDP port forwarding example
But not like this:
#8889 = 192.168.27.128:22

3306 = 192.168.159.128:3306
[incomingudp]
# UDP port forwarding example
My script [NG]:
sed "/\[incomingudp\]/i 3306 = 192.168.159.128:3306" vmnetnat.conf
Besides, I want to replace the line when the file already has a line starting with 3306 =.

You could try this (two passes over the file: the first pass finds the blank line just before the match, the second pass prints and inserts):
awk '
/3306 = /{ignore=1}                                    # a 3306 line already exists: skip the insert
(FNR==position && !ignore){print "3306 = 192.168.159.128:3306"}
!$0{newline=FNR}                                       # remember the most recent blank line
/\[incomingudp\]/&&(FNR==newline+1){position=newline}  # blank line directly precedes the match
NR!=FNR                                                # second pass: print every line
' <file> <file>
To replace the file, you can do:
awk '
/3306 = /{ignore=1}
(FNR==position && !ignore){print "3306 = 192.168.159.128:3306"}
!$0{newline=FNR}
/\[incomingudp\]/&&(FNR==newline+1){position=newline}
NR!=FNR
' <file> <file> > tmp && mv tmp <file>
Or, to leave the file untouched when a 3306 line already exists, you could also do:
awk '
/3306 = /{exit 1}
(FNR==position){print "3306 = 192.168.159.128:3306"}
!$0{newline=FNR}
/\[incomingudp\]/&&(FNR==newline+1){position=newline}
NR!=FNR{print $0}
' <file> <file> > tmp && mv tmp <file> || rm tmp
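If you only need the insertion and no duplicate check, a GNU sed sketch in the spirit of the question is also possible (it assumes exactly one blank line directly precedes the section header):
# a sketch, GNU sed assumed; it misses the case of several consecutive blank lines
sed '/^$/{N;/\n\[incomingudp\]/s/^/3306 = 192.168.159.128:3306\n/}' vmnetnat.conf
On a blank line, N appends the next line to the pattern space; if that next line is [incomingudp], the new entry is prepended, which places it before the blank line.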

Replace or append in configuration file with sed

I would like to replace or append in a configuration file like sshd_config:
Key1 value
#Key2 value
The idea of the command is:
$ cmd Key1 home file
$ cmd Key2 house file
$ cmd Key3 flat file
So the resulting file is:
Key1 home
Key2 house
Key3 flat
Any help is more than welcome.
I have taken this as an example, but the variant that comments and uncomments lines is not working properly.
Besides, I have managed with other options, but only for commented or uncommented lines, and I want everything in one command if possible.
sed '/^Key\s/{h;s/\(\s\).*/\1newvalue/};${x;/^$/{s//Key newvalue/;H};x}' file
This one works if the Key exists, but how do I append if it doesn't?
sed -i 's/^#\(Key\s\).*/\1newvalue/g' file
Thanks a lot. I have tried to understand sed, but the handling of the different spaces is quite complex and I don't know how to match lines with or without #.
Edit: in-place editing with GNU awk's -i inplace, discarding stdout:
$ sudo tee -a /usr/local/bin/conf-space-replace-or-append > /dev/null << 'EOL'
#!/bin/bash
awk -i inplace -v key="$1" -v val="$2" '
($1 == key) || ($1 == "#"key) { $0 = key OFS val; done=1 }
{ print }
END { if (!done) print key, val }
' "$3" > /dev/null
EOL
$ sudo chmod +x /usr/local/bin/conf-space-replace-or-append
$ sudo conf-space-replace-or-append Port 22 /etc/ssh/sshd_config
sed is for doing s/old/new on an individual line, that is all. For anything else you should be using awk for clarity, simplicity, portability, efficiency, etc., etc.
Just put the following in a file named cmd and execute it as you show in your question.
awk -v key="$1" -v val="$2" '
($1 == key) || ($1 == "#"key) { next }
{ print }
END { print key, val }
' "$3"
The above deletes the existing key+val if present and always appends the new pair to the end of the file. If you'd rather keep an existing key in its original position in the file and only add new key+val pairs to the end, then that's just a tweak:
awk -v key="$1" -v val="$2" '
($1 == key) || ($1 == "#"key) { $0 = key OFS val; done=1 }
{ print }
END { if (!done) print key, val }
' "$3"

Merge two lines into one within a configuration file

I have several AIX systems with a configuration file, let's call it /etc/bar/config. The file may or may not have a line declaring values for foo. An example would be:
foo = A_1,GROUP_1,USER_1,USER_2,USER_3
The foo line may or may not be the same on all systems. Different systems may have different values and a different number of values. My task is to add "bare minimum" values in the config file on all systems. The bare minimum line will look like this:
foo = A_1,USER_1,SYS_1,SYS_2
If the line does not exist, I must create it. If the line does exist, I must merge the two lines. Using my examples, the result would be this. The order of the values does not matter.
foo = A_1,GROUP_1,USER_1,USER_3,USER_2,SYS_1,SYS_2
Obviously I want a script to do my work. I have the standard sh, ksh, awk, sed, grep, perl, cut, etc. Since this is AIX, I do not have access to the GNU versions of these utilities.
Originally, I had a script with these commands to replace the entire foo line.
cp /etc/bar/config /etc/bar/config.$$
sed "s/foo = .*/foo = A_1,USER_1,SYS_1,SYS_2/" /etc/bar/config.$$ > /etc/bar/config
But this simply replaces the line. It does not take into consideration any pre-existing configuration, including the case where the line is missing entirely. And I'm doing other configuration modifications in the script, such as adding completely unique lines to other files and restarting a process, so I'd prefer this to be some type of shell-based code snippet I can add to my change script. I am open to other options, especially if the solution is simpler.
Some dirty bash/sed:
#!/usr/bin/bash
input_file="some_filename"
# find the line number of the foo line; 0 if there is none
v=$(grep -n '^foo *=' "$input_file")
lineno=$(cut -d: -f1 <<< "${v}0:")
base="A_1,USER_1,SYS_1,SYS_2,"
if [[ "$lineno" == 0 ]]; then
echo "foo = A_1,USER_1,SYS_1,SYS_2" >> "$input_file"
else
# prepend the defaults to the existing values, then sort and deduplicate
all=$(sed -n ${lineno}'s/^foo *= */'"$base"'/p' "$input_file" | \
tr ',' '\n' | sort | uniq | tr '\n' ',')
all=$(sed -e 's/^/foo = /' -e 's/, *$//' -e 's/  */ /g' <<< "$all")
sed -i "${lineno}"'s/.*/'"$all"'/' "$input_file"
fi
Untested bash, etc.
config=/etc/bar/config
default=A_1,USER_1,SYS_1,SYS_2
pattern='^foo[[:blank:]]*=[[:blank:]]*' # shared with grep and sed
if current=$( grep "$pattern" "$config" | sed "s/$pattern//" ) && [ -n "$current" ] # test output, since the pipeline's status comes from sed
then
new=$( echo "$current,$default" | tr ',' '\n' | sort | uniq | paste -sd, )
sed "s/$pattern.*/foo = $new/" "$config" > "$config.$$.tmp" &&
mv "$config.$$.tmp" "$config"
else
echo "foo = $default" >> "$config"
fi
A vanilla perl solution:
perl -i -lpe '
BEGIN {%foo = map {$_ => 1} qw/A_1 USER_1 SYS_1 SYS_2/}    # seed with the bare-minimum values
if (s/^foo\s*=\s*//) {                                     # strip the prefix off the foo line
$found=1;
$foo{$_}=1 for split /,/;                                  # merge in the existing values
$_ = "foo = " . join(",", keys %foo);
}
END {print "foo = " . join(",", keys %foo) unless $found}  # append if no foo line was seen
' /etc/bar/config
This Perl code will do as you ask. It expects the path to the file to be modified as a parameter on the command line.
Note that it reads the entire input file into the array @config and then overwrites the same file with the modified data.
It works by building a hash %values from a combination of the items already present in the foo = line and the list of default items in @defaults. The combination is sorted in alphabetical order and joined with a comma.
use strict;
use warnings;
my @defaults = qw/ A_1 USER_1 SYS_1 SYS_2 /;
my ($file) = @ARGV;
my @config = <>;
open my $out_fh, '>', $file or die $!;
select $out_fh;
for ( @config ) {
if ( my ($pfx, $vals) = /^(foo \s* = \s* ) (.+) /x ) {
my %values;
++$values{$_} for $vals =~ /[^,\s]+/g;
++$values{$_} for #defaults;
print $pfx, join(',', sort keys %values), "\n";
}
else {
print;
}
}
close $out_fh;
output
foo = A_1,GROUP_1,SYS_1,SYS_2,USER_1,USER_2,USER_3
Since you didn't provide sample input and expected output I couldn't test this but this is the right approach:
awk '
/foo = / { old = ","$3; next }
{ print }
END {
split("A_1,USER_1,SYS_1,SYS_2"old,all,/,/)
for (i in all)
if (!seen[all[i]]++)
new = (new ? new "," : "") all[i]
print "foo =", new
}
' /etc/bar/config > tmp && mv tmp /etc/bar/config
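As a sanity check of the idea, here is what that should produce for the sample foo line from the question (the awk program above saved as merge.awk, a name used only for illustration, and run without the redirection); note that the exact order of the values can differ between awk implementations, because for (i in all) visits indices in an unspecified order:
$ echo 'foo = A_1,GROUP_1,USER_1,USER_2,USER_3' | awk -f merge.awk
foo = A_1,USER_1,SYS_1,SYS_2,GROUP_1,USER_2,USER_3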

Make some replacements on a bunch of files depending the number of columns per line

I'm having a problem dealing with some files. I need to perform a column count for every line in a file and, depending on the number of columns, I need to append several ',' at the end of each line. All lines should have 36 columns separated by ','.
This line solves my problem, but how do I run it on a folder with several files in an automated way?
awk 'BEGIN { FS = "," }
{ if (NF == 32) { print $0",,,," } else if (NF == 31) { print $0",,,,," } }
' <SOURCE_FILE> > <DESTINATION_FILE>
Thank you for all your support
R&P
The answer depends on your OS, which you haven't told us. On UNIX and assuming you want to modify each original file, it'd be:
for file in *
do
awk '...' "$file" > tmp$$ && mv tmp$$ "$file"
done
Also, in general to get all records in a file to have the same number of fields you can do this without needing to specify what that number of fields is (though you can if appropriate):
$ cat tst.awk
BEGIN { FS=OFS=","; ARGV[ARGC++] = ARGV[ARGC-1] }
NR==FNR { nf = (NF > nf ? NF : nf); next }
{
tail = sprintf("%*s",nf-NF,"")
gsub(/ /,OFS,tail)
print $0 tail
}
$
$ cat file
a,b,c
a,b
a,b,c,d,e
$
$ awk -f tst.awk file
a,b,c,,
a,b,,,
a,b,c,d,e
$
$ awk -v nf=10 -f tst.awk file
a,b,c,,,,,,,
a,b,,,,,,,,
a,b,c,d,e,,,,,
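If the target width is fixed and known up front (36 in the question), a shorter sketch relies on awk creating the missing fields when you assign one past the current last field:
# a sketch: assigning field 36 makes awk create fields NF+1..36 as empty,
# and rebuilding $0 joins them with OFS (",")
awk -v nf=36 'BEGIN{FS=OFS=","} NF<nf{$nf=""} {print}' file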
It's a short one-liner with Perl:
perl -i.bak -F, -alpe '$_ .= "," x (36-@F)' *
If this is only a single folder without subfolders, use:
for oldfile in /path/to/files/*
do
newfile="${oldfile}.new"
awk '...' "${oldfile}" > "${newfile}"
done
if you also want to include subdirectories recursively, it's probably easiest to put the awk+redirection into a small shell-script, like this:
#!/bin/bash
oldfile=$1
newfile="${oldfile}.new"
awk '...' "${oldfile}" > "${newfile}"
and then run this script (let's call it runawk.sh and make it executable with chmod +x) via find:
find /path/to/files/ -type f -not -name "*.new" -exec ./runawk.sh \{\} \;

Print all lines between the search patterns into different files using perl or any method

Could someone help out on this?
I want to print all lines between the search patterns (START & END) to different files (new_file_name can be any incremental name provided).
But the search patterns repeat in the file, hence each time they are found, the lines between them should be dumped into a different file.
The file is something like this
START --- ./body1/b1
##########################
123body1
abcbody1
##########################
END --- ./body1/b1
START --- ./body2/b2
##########################
123body2
defbody2
##########################
END --- ./body2/b2
A perl solution:
perl -MFile::Basename -MFile::Path -ne '
($a) = /^START.+?(\S+)$/;                           # grab the file name from a START line
$b = /^END/;
$a..$b or next;                                     # flip-flop: true only from START through END
if ($a){ mkpath(dirname $a); open STDOUT,">",$a; }  # new block: make the dir, reopen STDOUT
$a||$b or print;                                    # print body lines, skip the delimiters
' file
Here is my awk solution:
# print_between_patterns.awk
/^START/ { filename = $NF ; next } # On START, use the last field as file name
/^END/ { next } # On END, skip
{ print > filename } # For the rest of the lines, print to file
Assume your data file is called data.txt, the following will do what you want:
awk -f print_between_patterns.awk data.txt
Discussion
After the script runs, you will have ./body1, ./body2, and so on.
If you don't want to skip the START and END lines, remove the next commands.
Update
If you want to control the output filename in a sequential way:
/^START/ { filename = sprintf("out%04d.txt", ++count) ; next }
/^END/ { next }
{ print > filename }
To get automatically generated incremental file names:
awk '
/^END/ { inBlock=0 }
inBlock { print > outfile }
/^START/ { inBlock=1; outfile = "outfile" ++count }
' file
To use the file names from your input:
awk '
/^END/ { inBlock=0 }
inBlock { print > outfile }
/^START/ {
inBlock=1
outdir = outfile = $NF
sub(/\/[^\/]+$/,"",outdir)
system("mkdir -p \"" outdir "\"")
}
' file
The problem @JamesBond was having below was that I wasn't escaping the "/" within the character list in the sub() so I've updated my answer above to do that now. There's absolutely no reason why that should need to be escaped but apparently both nawk and /usr/xpg4/bin/awk require it:
$ cat file
the
quick/brown
dog
$ gawk '/[/]/' file
quick/brown
$ nawk '/[/]/' file
nawk: nonterminated character class [
source line number 1
context is
>>> /[/ <<< ]/
$ /usr/xpg4/bin/awk '/[/]/' file
/usr/xpg4/bin/awk: /[/: [ ] imbalance or syntax error Context is:
>>> /[/ <<<
and gawk doesn't care either way:
$ gawk --lint --posix '/[/]/' file
quick/brown
$ gawk --lint '/[/]/' file
quick/brown
$ gawk --lint --posix '/[\/]/' file
quick/brown
$ gawk --lint '/[\/]/' file
quick/brown
They all work just fine if I escape the backslash without putting it in a character list:
$ /usr/xpg4/bin/awk '/\//' file
quick/brown
$ nawk '/\//' file
quick/brown
$ gawk '/\//' file
quick/brown
So I guess that's something worth remembering for portability in future!
Using awk:
awk 'sub(/^START/, ""){out=sprintf("out%d", ++c); p=1; next}
sub(/^END/, ""){p=0; next} p{print > out}' file
This finds the lines between each START and END and stores them in separate files named out1, out2, etc.
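With the sample data from the question, the first output file should look like this (a sketch of the expected result, not a captured run):
$ cat out1
##########################
123body1
abcbody1
##########################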
This is one way to do it in Bash.
#!/bin/bash
[ -n "$BASH_VERSION" ] || {
echo "You need Bash to run this script."
exit 1
}
shopt -s extglob || {
echo "Unable to enable extglob shell option."
exit 1
}
IFS=$' \t\n' ## Use default.
while read KEY DASH FILENAME; do
if [[ $KEY == START && $DASH == --- && -n $FILENAME ]]; then
CURRENT_FILENAME=$FILENAME
DIRNAME=${FILENAME%%+([^/])}
if [[ -n $DIRNAME ]]; then
mkdir -p "$DIRNAME" || {
echo "Unable to create directory $DIRNAME."
exit 1
}
fi
exec 4>"$CURRENT_FILENAME" || {
echo "Unable to open $CURRENT_FILENAME for output."
exit 1
}
for (( ;; )); do
IFS= read -r LINE || {
echo "End of file reached finding END block of $CURRENT_FILENAME."
exec 4>&-
exit 1
}
read -r KEY DASH FILENAME <<< "$LINE"
if [[ $KEY == END && $DASH == --- && $FILENAME == "$CURRENT_FILENAME" ]]; then
break
else
echo "$LINE" >&4
fi
done
exec 4>&-
fi
done
Make sure you save the script in UNIX file format then run it as bash script.sh < file.
I guess you need to see this.
perl -lne 'print if((/START/../END/) and ($_!~/START/ and $_!~/END/))' your_file
Tested below:
> cat temp
START --- ./body1
##########################
123body1
abcbody1
##########################
END --- ./body1
START --- ./body2
##########################
123body2
defbody2
##########################
END --- ./body2
> perl -lne 'print if((/START/../END/) and ($_!~/START/ and $_!~/END/))' temp
##########################
123body1
abcbody1
##########################
##########################
123body2
defbody2
##########################
>
This might work for you:
csplit -z file '/^START/' '{*}'
Files will be named xx00, xx01, xx02, ...
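Each piece keeps its START and END delimiter lines, and -z suppresses the otherwise empty leading piece, so with the question's sample xx00 begins at the first START line:
$ head -1 xx00
START --- ./body1/b1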

Exclude e-mails whose domain name matches one of the global ones

The global domains are the entries with "*#"; when an e-mail's domain matches one of these global domains, I need to exclude it from the list.
Example:
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#superuser.com
WF,test#stackapps.com
WF,test#stackexchange.com
Output:
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#stackapps.com
You have two types of data in the same file, so the easiest way to process it is to divide it first:
<infile tee >(grep '\*#' > global) >(grep -v '\*#' > addr) > /dev/null
Then use global to remove information from addr:
grep -vf <(cut -d# -f2 global) addr
Putting it together:
<infile tee >(grep '\*#' > global) >(grep -v '\*#' > addr) > /dev/null
cat global <(grep -vf <(cut -d# -f2 global) addr) > outfile
Contents of outfile:
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#stackapps.com
Clean up temporary files with rm global addr.
$ awk -F, 'NR==FNR && /\*#/{a[substr($2,3)]=1;print;next}NR!=FNR && $2 !~ /^\*/{x=$2;sub(/.*#/,"",x); if (!(x in a))print;}' OFS=, file file
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#stackapps.com
You could do:
grep -o "\*#.*" file.txt | sed -e 's/^/[^*]/' > global.txt
grep -vf global.txt file.txt
This will start by extracting the global emails, and prepend them with [^*], saving the results into global.txt. This file is then used as input to grep, where each line is treated as a regex in the form [^*]*#global.domain.com. The -v option tells grep to only print lines that don't match that pattern.
Another analogous option, using sed for in-place editing would be:
grep -o "\*#.*" file.txt | sed -e 's/^.*$/\/[^*]&\/d/' > global.sed
sed -i -f global.sed file.txt
Here's one way using GNU awk. Run like:
awk -f script.awk file.txt{,}
Contents of script.awk:
BEGIN {
FS=","
}
FNR==NR {
if (substr($NF,1,1) == "*") {
array[substr($NF,2)]++
}
next
}
substr($NF,1,1) == "*" || !(substr($NF,index($NF,"#")) in array)
Results:
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#stackapps.com
Alternatively, here's the one-liner:
awk -F, 'FNR==NR { if (substr($NF,1,1) == "*") array[substr($NF,2)]++; next } substr($NF,1,1) == "*" || !(substr($NF,index($NF,"#")) in array)' file.txt{,}
This might work for you (GNU sed):
sed '/.*\*\(#.*\)/!d;s||/[^*]\1/d|' file | sed -f - file
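With the question's data, that first sed turns each global line into a delete command, so the script piped into the second sed looks like this:
/[^*]#stackoverflow.com/d
/[^*]#superuser.com/d
/[^*]#stackexchange.com/d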
With one pass of the file and allowing for the global domains to be intermixed with the addresses:
$ cat file
WF,*#stackoverflow.com
WF,test#superuser.com
WF,*#superuser.com
WF,test#stackapps.com
WF,test#stackexchange.com
WF,*#stackexchange.com
WF,foo#stackapps.com
$
$ awk -F'[,#]' '
$2=="*" { glbl[$3]; print; next }
{ addrs[$3] = addrs[$3] $0 ORS }
END {
for (dom in addrs)
if (!(dom in glbl))
printf "%s",addrs[dom]
}
' file
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#stackapps.com
WF,foo#stackapps.com
or if you don't mind a 2-pass approach:
$ awk -F'[,#]' '(NR==FNR && $2=="*" && !glbl[$3]++) || (NR!=FNR && !($3 in glbl))' file file
WF,*#stackoverflow.com
WF,*#superuser.com
WF,*#stackexchange.com
WF,test#stackapps.com
WF,foo#stackapps.com
I know that second one's a bit cryptic, but it's easily translated to not use the default action (see the sketch below) and it's a good exercise in awk idioms :-).
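For reference, here is a sketch of that second one-liner translated to explicit actions:
awk -F'[,#]' '
NR==FNR {                        # first pass: print each global line once and record its domain
    if ($2 == "*" && !glbl[$3]++)
        print
    next
}
!($3 in glbl) { print }          # second pass: print addresses whose domain is not global
' file file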