How to create multiple reg files using Linux regex (sed)

I have a data file and a reg template file.
The data file contains:
c01218 172.20.13.50
c01203 172.20.13.35
c01204 172.20.13.36
c01220 172.20.13.52
c01230 172.20.13.55
reg template:
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\name]
"Present"=dword:00000001
"HostName"="172.28.130.0"
I want to create a loop that generates a new reg file from the template for each line of the data file: the file should be named after the first column, the "name" at the end of the HKEY_USERS path should be replaced with the first column, and the IP address should be replaced with the second column.
For example:
sed -e "s/name/name1/g" -e "s/172.28.130.0/172.28.130.1/g" 1.reg
Expected view after the command:
#cat c01218.reg
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\c01218]
"Present"=dword:00000001
"HostName"="172.20.13.50"

sed is an excellent tool for simple substitutions on a single line; for anything else, just use awk:
awk '{ printf "[HKEY_USERS\\S-1-5-21-2000478354-2111687655-1801674531-230160\\Software\\SimonTatham\\PuTTY\\Sessions\\%s]\n\"Present\"=dword:00000001\n\"HostName\"=\"%s\"\n", $1, $2 > ($1 ".reg") }' data
or if you prefer:
awk -v template="\
[HKEY_USERS\\S-1-5-21-2000478354-2111687655-1801674531-230160\\Software\\SimonTatham\\PuTTY\\Sessions\\%s]
\"Present\"=dword:00000001
\"HostName\"=\"%s\"
" '{ printf template, $1, $2 > ($1 ".reg") }' data
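If the data file is large, note that each `> file` redirection in awk keeps its output file open until the program ends; closing each file after writing sidesteps the per-process open-file limit. A sketch of the same idea (the data contents here are just a two-line excerpt from the question):

```shell
# Two-line excerpt of the question's data file, for demonstration
printf 'c01218 172.20.13.50\nc01203 172.20.13.35\n' > data

awk '{
  out = $1 ".reg"   # one .reg file per data line, named after column 1
  printf "[HKEY_USERS\\S-1-5-21-2000478354-2111687655-1801674531-230160\\Software\\SimonTatham\\PuTTY\\Sessions\\%s]\n\"Present\"=dword:00000001\n\"HostName\"=\"%s\"\n", $1, $2 > out
  close(out)        # release the file descriptor before the next record
}' data

cat c01218.reg
```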

Try:
$ while read -r a b; do sed -e "s/name]/$a]/" -e "s/^\"HostName.*/\"HostName\"=\"$b\"/" template > "$a.reg"; done < data
A little messy, since double quotes have to be used so the shell expands the variables inside the sed substitutions, and every literal " then needs to be escaped.
output:
$ ls
c01203.reg c01204.reg c01218.reg c01220.reg c01230.reg data template
$ cat c*
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\c01203]
"Present"=dword:00000001
"HostName"="172.20.13.35"
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\c01204]
"Present"=dword:00000001
"HostName"="172.20.13.36"
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\c01218]
"Present"=dword:00000001
"HostName"="172.20.13.50"
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\c01220]
"Present"=dword:00000001
"HostName"="172.20.13.52"
[HKEY_USERS\S-1-5-21-2000478354-2111687655-1801674531-230160\Software\SimonTatham\PuTTY\Sessions\c01230]
"Present"=dword:00000001
"HostName"="172.20.13.55"

sed replace positional match of unknown string divided by user-defined separator

I want to rename the (known) 3rd folder in an (unknown) file path in a string, when it sits at the 3rd level and the separator is /.
I need a one-liner explicitly for sed, because I later want to use it for tar --transform=EXPRESSION.
string="/db/foo/db/bar/db/folder"
echo "$string" | sed 's,db,databases,'
The sed should replace "db" only at the 3rd level.
expected result
/db/foo/databases/bar/db/folder
You could use a capturing group to capture /db/foo/ and then match db, then use the first capturing group in the replacement via \1:
string="/db/foo/db/bar/db/folder"
echo -e "$string" | sed 's,^\(/[^/]*/[^/]*/\)db,\1databases,'
About the pattern
^ Start of string
\( Start capture group
/[^/]*/[^/]*/ Match the first 2 parts using a negated character class
\) Close capture group
db Match literally
That will give you
/db/foo/databases/bar/db/folder
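If the level ever needs to be configurable (say, for different tar --transform expressions), the same idea can be parameterized: match n-1 leading /component chunks, then the db to replace. This is a sketch assuming a sed with BRE interval (\{m\}) support; n=3 is just this question's level:

```shell
n=3   # which path level holds the "db" to rename (hypothetical parameter)
string="/db/foo/db/bar/db/folder"
# (n-1) leading "/component" chunks are captured, then "db" is matched
echo "$string" | sed "s,^\(\(/[^/]*\)\{$((n-1))\}/\)db,\1databases,"
# prints /db/foo/databases/bar/db/folder
```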
If awk is also an option for this task:
$ awk 'BEGIN{FS=OFS="/"} $4=="db"{$4="databases"} 1' <<<'/db/foo/db/bar/db/folder'
/db/foo/databases/bar/db/folder
FS = OFS = "/" assigns / to both the input and output field separators,
$4 == "db" { $4 = "databases" } if the fourth field is db, make it databases,
1 prints the record.
Here is a pure bash way to get this done by setting IFS=/ without calling any external utility:
string="/db/foo/db/bar/db/folder"
string=$(IFS=/; read -a arr <<< "$string"; arr[3]='databases'; echo "${arr[*]}")
echo "$string"
/db/foo/databases/bar/db/folder

Change numbering according to field value by bash script

I have a tab-delimited file like this (it has no header row; in the example I use the pipe character as the delimiter and show the headers only for clarity):
ID1|ID2|VAL1|
1|1|3
1|1|4
1|2|3
1|2|5
2|2|6
I want to add a new field to this file that changes whenever ID1 or ID2 changes. Like this:
1|1|3|1
1|1|4|1
1|2|3|2
1|2|5|2
2|2|6|3
Is this possible with a one-liner in sed, awk, perl etc., or should I use a standard programming language (Java) for this task? Thanks in advance for your time.
Here is an awk one-liner:
awk -F\| '$1$2!=a {f++} {print $0,f;a=$1$2}' OFS=\| file
1|1|3|1
1|1|4|1
1|2|3|2
1|2|5|2
2|2|6|3
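One caveat about using $1$2 as the key: plain concatenation can collide, e.g. the ID pairs (1,12) and (11,2) both concatenate to "112" and would be counted as one group. Putting a separator between the fields (awk's built-in SUBSEP is conventional) avoids that. A sketch with the sample data:

```shell
# Sample data from the question
printf '1|1|3\n1|1|4\n1|2|3\n1|2|5\n2|2|6\n' > file

awk -F'|' -v OFS='|' '
  { key = $1 SUBSEP $2 }     # SUBSEP keeps (1,12) and (11,2) distinct
  key != prev { f++ }        # new ID pair: bump the group counter
  { print $0, f; prev = key }
' file
```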
Simple enough with bash, though I'm sure you could figure out a one-line awk:
#!/bin/bash
count=1
while IFS='|' read -r id1 id2 val1; do
    # Can remove the next 3 lines if you're sure you won't have extraneous whitespace
    id1="${id1//[[:space:]]/}"
    id2="${id2//[[:space:]]/}"
    val1="${val1//[[:space:]]/}"
    [[ ( -n $old1 && $old1 -ne $id1 ) || ( -n $old2 && $old2 -ne $id2 ) ]] && ((count+=1))
    echo "$id1|$id2|$val1|$count"
    old1="$id1" && old2="$id2"
done < file
For example
> cat file
1|1|3
1|1|4
1|2|3
1|2|5
2|2|6
> ./abovescript
1|1|3|1
1|1|4|1
1|2|3|2
1|2|5|2
2|2|6|3
Replace IFS='|' with IFS=$'\t' for tab-delimited input.
Using awk:
awk 'FNR>1{print $0 FS (++a[$1$2]=="1"?++i:i)}' FS=\| file

Insert a string/number into a specific cell of a csv file

Basically, right now I have a for loop running that runs a series of tests. Once the tests pass, I input the results into a csv file:
for (( some statement ))
do
    if [[ something ]]; then
        input this value into a specific row and column
    fi
done
What I can't figure out right now is how to input a specific value into a specific cell in the csv file. I know that in awk you can read a cell with this command:
awk -v "row=2" -F'#' 'NR == row { print $2 }' some.csv
This will print the cell in the 2nd row and 2nd column. I need something similar to this, except that it can write a value into a specific cell instead of reading it. Is there a function that does this?
You can use the following:
awk -v value="$value" -v row="$row" -v col="$col" 'BEGIN{FS=OFS="#"} NR==row {$col=value}1' file
And set the bash values $value, $row and $col. Then you can redirect and move to the original:
awk ... file > new_file && mv new_file file
The && means that the second command (mv) runs only if the first command (awk ...) succeeds.
Explanation
-v value="$value" -v row="$row" -v col="$col" passes the bash variables to awk. Note that value, row and col could have other names; I just used the same names as in bash to make it easier to understand.
BEGIN{FS=OFS="#"} sets the input and output field separators to #. The OFS="#" part matters here because assigning to $col makes awk rebuild $0 using OFS.
NR==row {$col=value} when the record number (the line number here) equals row, set column col to value.
1 performs the default awk action: {print $0}.
Example
$ cat a
hello#how#are#you
i#am#fine#thanks
hoho#haha#hehe
$ row=2
$ col=3
$ value="XXX"
$ awk -v value="$value" -v row="$row" -v col="$col" 'BEGIN{FS=OFS="#"} NR==row {$col=value}1' a
hello#how#are#you
i#am#XXX#thanks
hoho#haha#hehe
Your question has a 'perl' tag, so here is a way to do it using Tie::Array::CSV, which lets you treat the CSV file as an array of arrays and use standard array operations:
use strict;
use warnings;
use Tie::Array::CSV;
my $row = 2;
my $col = 3;
my $value = 'value';
my $filename = '/path/to/file.csv';
tie my @file, 'Tie::Array::CSV', $filename, sep_char => '#';
$file[$row][$col] = $value;
untie @file;
Using sed:
row=2          # the row number
col=3          # the column number
value="value"  # the new value
sed "$row s/[^#]\{1,\}/$value/$col" file.csv
The shell variable $row selects the line by address first; then the s command replaces the col'th run of characters between # separators, i.e. the col'th column. So the above sed command expands to:
sed "2 s/[^#]\{1,\}/value/3" file.csv
If the above command looks fine and your sed supports the -i option, run this to change the content of file.csv directly:
sed -i "$row s/[^#]\{1,\}/$value/$col" file.csv
Otherwise, write to a temporary file and rename it back:
sed "$row s/[^#]\{1,\}/$value/$col" file.csv > temp.csv
mv temp.csv file.csv
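One caveat worth knowing before relying on the column count: [^#]\{1,\} matches runs of one or more non-# characters, so an empty field (two adjacent #) contributes no match and shifts which run counts as "the 3rd". A quick check with the sample data (the file name and values are just for the demo):

```shell
printf 'hello#how#are#you\ni#am#fine#thanks\n' > file.csv
row=2 col=3 value=XXX
sed "$row s/[^#]\{1,\}/$value/$col" file.csv
# On a line like "i##fine#thanks" the same command would replace "thanks",
# because the empty second field contributes no [^#] match
```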

Rearrangement rows to columns on linux

I want to rearrange rows into columns, with spaces between the strings.
q9I6OEFg003411
q9I5IHv5006818
q9I6Gi6P024439
q9I5RoA0019541
Expected view:
q9I6OEFg003411 q9I5IHv5006818 q9I6Gi6P024439 q9I5RoA0019541
You can use tr to translate each <newline> into a <space>:
tr '\n' ' ' < file
Also in sed:
sed -n '1h;1!H;${g;s/\n/ /g;p}' file
In awk:
awk -vORS=' ' 1 file
If the file is small, you can use cat:
echo `cat file`
If you know vim:
:%s/\n/ /
maybe more ...
Using paste command:
paste -sd" " file
The -s option joins all input lines into one, and -d sets the delimiter.
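A small difference between tr and paste, in case trailing whitespace matters downstream: tr also translates the file's final newline, so its output ends with a space and no newline, while paste -s only inserts the delimiter between lines and still terminates the output with a newline (standard POSIX behavior for both, as far as I know):

```shell
printf 'a\nb\nc\n' > file
tr '\n' ' ' < file   # prints "a b c " - trailing space, no final newline
echo                 # terminate the demo line ourselves
paste -sd' ' file    # prints "a b c" followed by a normal final newline
```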

editing text files with perl

I'm trying to edit a text file that looks like this:
TYPE=Ethernet
HWADDR=00:....
IPV6INIT=no
MTU=1500
IPADDR=192.168.2.247
...
(It's actually the /etc/sysconfig/network-scripts/ifcfg- file on Red Hat Linux.)
Instead of reading and rewriting the file each time I want to modify it, I figured I could use grep, sed, awk or the native text parsing functionality provided in Perl.
For instance, if I wanted to change the IPADDR field of the file, is there a way I can just retrieve and modify the line directly? Maybe something like
grep 'IPADDR=' <filename>
but add some additional arguments to modify that line? I'm a little new to UNIX based text processing languages so bear with me...
Thanks!
Here's a Perl one-liner to replace the IPADDR value with the IP address 127.0.0.1. It's short enough that you should be able to see what you need to modify to alter other fields*:
perl -p -i.orig -e 's/^IPADDR=.*$/IPADDR=127.0.0.1/' filename
It will rename "filename" to "filename.orig", and write out the new version of the file into "filename".
Perl command-line options are explained at perldoc perlrun (thanks for the reminder toolic!), and the syntax of perl regular expressions is at perldoc perlre.
*The regular expression ^IPADDR=.*$, split into components, means:
^ # bind to the beginning of the line
IPADDR= # plain text: match "IPADDR="
.* # followed by any number of any character (`.` means "any one character"; `*` means "any number of them")
$ # bind to the end of the line
Since you are on Red Hat, you can try using the shell:
#!/bin/bash
file="file"
read -p "Enter field to change: " field
read -p "Enter new value: " newvalue
shopt -s nocasematch
while IFS="=" read -r f v
do
    case "$f" in
        $field) v=$newvalue;;
    esac
    echo "$f=$v"
done <$file > temp
mv temp file
UPDATE:
file="file"
read -p "Enter field to change: " field
read -p "Enter new value: " newvalue
shopt -s nocasematch
EOL=false
IFS="="
until $EOL
do
    read -r f v || EOL=true
    case "$f" in
        $field) v=$newvalue;;
    esac
    echo "$f=$v"
done <$file #> temp
#mv temp file
Or, using just awk (note that IGNORECASE only works in GNU awk):
awk 'BEGIN{
    printf "Enter field to change: "
    getline field < "-"
    printf "Enter new value: "
    getline newvalue < "-"
    IGNORECASE=1
    OFS=FS="="
}
field == $1 {
    $2=newvalue
}
{
    print $0 > "temp"
}
END{
    cmd="mv temp " FILENAME
    system(cmd)
}' file
Or with Perl:
printf "Enter field: ";
chomp($field=<STDIN>);
printf "Enter new value: ";
chomp($newvalue=<STDIN>);
while (<>) {
    my ($f, $v) = split /=/;
    if ($field =~ /^$f/i) {
        $v = $newvalue;
    }
    print join("=", $f, $v);
}
That would be the 'ed' command-line editor: like sed, but it puts the file back where it came from.