I'm trying to adapt a script to make it work on an AIX server. The script must replace a line that contains a pattern with another line containing the same pattern but with more information added.
Following my previous question here:
I want to replace the line which contains the pattern CASIOPEA_STORE_BDD_PWD. Here is the code, which still doesn't work:
sed -i 's/^.*\bCASIOPEA_STORE_BDD_PWD\b.*$/CASIOPEA_STORE_BDD_PWD='MyCasioPass2014#'/g' casiopeia.conf
Now I'm trying the script on OS X and this command throws the following error message:
sed: 1: "File's route ...": invalid command code m
Q2: Is it possible to add this line to the file casiopeia.conf if it doesn't exist in the file?
I don't think the in-place editing (-i) flag of sed works on AIX.
You could try:
sed "s/^.*\bCASIOPEA_STORE_BDD_PWD\b.*$/CASIOPEA_STORE_BDD_PWD=\'MyCasioPass2014\'/g" casiopeia.conf >casiopeia.edit
Or install the GNU version of sed on AIX
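Note that the redirect above only writes the edited copy to casiopeia.edit; to end up with the change in the original file (what -i would have done), a minimal sketch is to move the copy back over the original once sed succeeds. The \b word-boundary escape is a GNU extension, so it may also need to be dropped for the stock AIX sed:
sed "s/^.*CASIOPEA_STORE_BDD_PWD.*$/CASIOPEA_STORE_BDD_PWD='MyCasioPass2014'/" casiopeia.conf >casiopeia.edit && mv casiopeia.edit casiopeia.conf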
For Q2:
#!/bin/ksh
search="CASIOPEA_STORE_BDD_PWD"
replace="CASIOPEA_STORE_BDD_PWD='MyCasioPass2014'"
filename="$1"
#Grep quietly for the search pattern
grep -q "$search" "$filename"
if [ $? -ne 0 ]; then
#Match not found, append the full replacement line
echo "$replace" >> "$filename"
else
#Match found, create tmp file for in-place editing
tmpfile=$(mktemp "/tmp/$(basename "$filename").XXXXXX")
#Copy original to tmpfile
cat "$filename" > "$tmpfile"
#Replace the whole matching line with sed
sed "s/^.*$search.*$/$replace/g" "$tmpfile" > "$filename"
#Remove temp file
rm "$tmpfile"
fi
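A hypothetical invocation, assuming the script above is saved as update_conf.ksh (a name chosen here for illustration) and made executable:
./update_conf.ksh casiopeia.conf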
Related
I'm having trouble getting sed to remove just the specific line I want. Let's say I have a file that looks like this:
testfile
testfile.txt
testfile2
Currently I'm using this to remove the line I want:
sed -i "/$1/d" file
The issue is that if I were to give testfile as input, this would delete all three lines, but I want it to remove only the first line. How do I do this?
With grep
grep -x -F -v -- "$1" file
# or
grep -xFv -- "$1" file
-F is for "fixed strings" -- turns off regex engine.
-x is to match entire line.
-v is for "everything but" the matched line(s).
-- to signal the end of options, in case $1 starts with a hyphen.
To save the file
grep -xFv -- "$1" file | sponge file # `moreutils` package
# or
tmp=$(mktemp)
grep -xFv -- "$1" file > "$tmp" && mv "$tmp" file
So match the whole line.
var=testfile
sed -i '/^'"$var"'$/d' file
# or with " quoting
sed -i "/^$var\$/d" file
You can have fun learning regex online with regex crosswords.
I have a folder of 500 *.INI files that I need to manually edit. Within each INI file, I have the line Source =. I would like that line to become Source = C:\software\{filename}.
For instance, a dx4.ini file would need to be fixed to become: Source = C:\software\dx4
Is there a quick way to do this with Find, Grep, or Sed functions?
You can try with sed
For example
Input file contents:
file.txt
Source =
some lines..
script:
newstring='Source = C:\\software\\dx4'   # backslashes doubled so sed keeps them in the replacement
oldstring='Source ='
sed "s/$oldstring/$newstring/g" file.txt > file.tmp && mv file.tmp file.txt
After running the above commands
output:
Source = C:\software\dx4
some lines..
If you want to edit a file in a script, I think ed is the way to go. Combined with a shell for loop:
for file in *.INI; do
base=$(basename "$file" .INI)
ed -s "$file" <<EOF
/^Source =/s/=/= C:\\\\software\\\\$base/
w
EOF
done
(This does assume that the filenames do not contain newlines or ampersands.)
With GNU awk for the 3rd arg to match(), gensub(), and "inplace" editing:
awk -i inplace '
match($0,/(.*Source = C:\\software\\){filename}(.*)/,a) {
fname = gensub(/\..*/,"",1,FILENAME)
$0 = a[1] fname a[2]
}
1' *.INI
The above assumes you're running in a UNIX environment, though your use of the term folder instead of directory, and that path starting with C: and containing backslashes, make me suspicious. If you're on Windows, then save the part between the two single quotes (exclusive) in a file named foo.awk and execute it as awk -i inplace -f foo.awk *.INI, or however you normally execute commands like this on Windows.
find *.INI -type f > stack
while read -r line
do
sed -i "s#Source =#Source = C:\\\\software\\\\dx4#" "${line}"
done < stack
Assuming that a) you have a sed with "-i" (the in-place edit flag, which AFAIK is not always portable) and b) sed doesn't choke on the escaped backslashes, I think that will work.
I need help with a shell script that processes files. The script should read each file in the path and replace a string in each row.
It should read each line and mask part of the 7th column with X characters, as shown in the sample output. Any help is appreciated.
Input file data
"2013-04-30"|"X"|"0000628"|"15000231"|"1999-12-05"|"ST"|"2455525445552000"|"1111-11-11"|75.00|"XXE11111"|"224425"
"2013-04-30"|"Y"|"0000928"|"95000232"|"1999-12-05"|"VT"|"2455525445552000"|"1111-11-11"|95.00|"VVE11111"|"224425"
output file
"2013-04-30"|"X"|"0000628"|"15000231"|"1999-12-05"|"ST"|"24555XXXXXXXXXX"|"1111-11-11"|75.00|"XXE11111"|"224425"
"2013-04-30"|"Y"|"0000928"|"95000232"|"1999-12-05"|"VT"|"24555XXXXXXXXXX"|"1111-11-11"|95.00|"VVE11111"|"224425"
The script I ran is below, but it is not editing the input file:
FILES=/home/auto/*.txt
for f in $FILES
do
echo "Processing $f file..."
cat $f | awk 'BEGIN {FS="|"; OFS="|"} {$7=substr($7, 1, 6)"XXXXXXXXXX\"";print}'
done
but I can't edit the existing file in the directory. I need to use the sed -i option but it's not working.
I tried running the script on the server below, but I am getting the following error.
SunOS 5.10 Generic January 2005
echo "hello"
FILES=/export/home/*.txt
for f in $FILES
do
echo "Processing $f file..."
sed -i -r 's/"([^"]{6})[^"]*"/"\1XXXXXXXXXX"/6' "$f"
done
I get
sed: illegal option -- i
Using GNU sed with the -i option:
sed -i -r 's/"([^"]{5})[^"]*"/"\1XXXXXXXXXX"/5' file
"2013-04-30"|"X"|"0000628"|"15000231"|"1999-12-05"|"ST"|"24555XXXXXXXXXX"|"1111-11-11"|75.00|"XXE11111"|"224425"
"2013-04-30"|"Y"|"0000928"|"95000232"|"1999-12-05"|"VT"|"24555XXXXXXXXXX"|"1111-11-11"|95.00|"VVE11111"|"224425"
If your awk is GNU awk 4.1.0 or later, there is an in-place option; read the man/info page.
Otherwise, you could do:
awk '..code..' inputfile > tmpfile && mv tmpfile inputfile
Note: the cat is not necessary and could (should) be removed.
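Applied to the loop from the question, a minimal sketch (the $f.tmp temp-file name is just for illustration):
FILES=/home/auto/*.txt
for f in $FILES
do
echo "Processing $f file..."
awk 'BEGIN {FS="|"; OFS="|"} {$7=substr($7, 1, 6)"XXXXXXXXXX\"";print}' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done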
A little ugly but you can try something like this with sed
sed -i 's/\(\([^|]*|\)\{6\}\)\(.\{6\}\).\{11\}\(.*\)/\1\3XXXXXXXXXXX\4/' file
So with your existing script, it will be -
FILES=/home/auto/*.txt
for f in $FILES
do
echo "Processing $f file..."
sed -i 's/\(\([^|]*|\)\{6\}\)\(.\{6\}\).\{11\}\(.*\)/\1\3XXXXXXXXXXX\4/' "$f"
done
I want to delete all the rows/lines in a file that contain a specific character, '?' in my case. I hope there is a one-line command in Bash, AWK, or Perl. Thanks.
You can use sed to modify the file "in-place":
sed -i "/?/d" file
Alternatively, use grep:
grep -v "?" file > newfile.txt
Even better, just a single line using sed
sed '/?/d' input
use -i to edit file in place.
perl -i -ne'/\?/ or print' file
or
perl -i -pe's/^.*?\?.*//s' file
There are already grep, sed, and perl solutions; just for fun, here is a pure bash one (an example invocation follows the explanation below):
pattern='?'
while IFS= read -r line
do
[[ "$line" =~ "$pattern" ]] || echo "$line"
done
Translated:
for every line on STDIN
match it against the pattern (=~)
and if the match is not successful (||), print out the line
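For example, assuming the pattern assignment and the loop above are saved as strip_qmarks.sh (a hypothetical name), the script reads from standard input and writes the kept lines to standard output:
bash strip_qmarks.sh < file > newfile && mv newfile file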
awk '!($0 ~ /\?/){print $0}' file_name
My goal is to delete a line in the file only if it matches a given PATH.
For example, I need to delete all lines that contain the /etc/sysconfig PATH from the file /tmp/file:
more /tmp/file
/etc/sysconfig/network-scripts/ifcfg-lo file1
/etc/sysconfig/network-scripts/ifcfg-lo file2
/etc/sysconfig/network-scripts/ifcfg-lo file3
I wrote the following Perl code (the Perl code is integrated in my bash script) in order to delete lines that contain "/etc/sysconfig":
export FILE=/etc/sysconfig
perl -i -pe 's/\Q$ENV{FILE}\E// ' /tmp/file
But I get the following after I run the perl code (instead of empty lines):
/network-scripts/ifcfg-lo file1
/network-scripts/ifcfg-lo file2
/network-scripts/ifcfg-lo file3
First question:
How do I change the perl syntax perl -i -pe 's/\Q$ENV{FILE}\E//' so that it deletes the lines that match the required PATH (/etc/sysconfig)?
Second question:
The same as the first question, but the line should be deleted only if the PATH matches the first field in the line.
Example:
/tmp/file before perl edit:
file1 /etc/sysconfig/network-scripts/ifcfg-lo
/etc/sysconfig/network-scripts/ifcfg-lo file2
/etc/sysconfig/network-scripts/ifcfg-lo file3
/tmp/file after perl edit:
file1 /etc/sysconfig/network-scripts/ifcfg-lo
Perl is a fine way to do it. Use the -n switch, not -p.
perl -i -l -n -e'print unless /\Q$ENV{FILE}/' filename
s/pattern/otherpattern/ won't delete entire lines; it will only alter substrings. You need to entirely change your program to delete entire lines. In pseudocode, it would be:
while (read in a line)
{
if (doesn't match)
{
write the line back out unaltered.
}
}
It can still be rewritten as a one-liner though, with knowledge of how continue and redo work in loops: perl -pe '$_ = <> and redo if /\Q$ENV{FILE}\E/'
mef@iwlappy:~$ cat /tmp/file
aaaa
/etc/sysconfig/network-scripts/ifcfg-lofile1
/etc/sysconfig/network-scripts/ifcfg-lofile2
/etc/sysconfig/network-scripts/ifcfg-lofile3
aaa
mef@iwlappy:~$ perl -i -pe 's/$ENV{FILE}\E.*//' /tmp/file
mef@iwlappy:~$ cat /tmp/file
aaaa
aaa
You can do a further pass to remove the resulting empty lines.
If I were doing this from the command line, I probably wouldn't even use Perl. I'd just use a negated grep:
$ mv old.txt old.bak; grep -v "$FILE" old.bak > old.txt
Renaming the original file and writing to a new file with the old name is the same thing that perl's -i switch does for you.
If you want to match just the first column, then I might punt to perl so I don't have to use awk or cut. perl's -a switch splits the line on whitespace and puts the results in @F:
$ perl -ai.bak -ne 'print if $F[0] !~ /^\Q$ENV{FILE}/' old.txt
When you think you have it right, you can remove the .bak training wheels that saves a copy of your original file. Or not. I tend to like the safety net.
See perlrun for the details of command-line switches.