I'd like to use Sed to expand variables inside a file.
Suppose I exported a variable VARIABLE=something, and have a "test" file with the following:
I'd like to expand this: "${VARIABLE}"
I've been trying commands like the following, but to no avail:
cat test | sed -e "s/\(\${[A-Z]*}\)/`eval "echo '\1'"`/" > outputfile
The result is the "outputfile" with the variable still not expanded:
I'd like to expand this: "${VARIABLE}"
Still, running eval "echo '${VARIABLE}'" in a bash console results in the value "something" being echoed. Also, I tested and that pattern is truly being matched.
The desired output would be
I'd like to expand this: "something"
Can anyone shed some light on this?
Consider your trial version:
cat test | sed -e "s/\(\${[A-Z]*}\)/`eval "echo '\1'"`/" > outputfile
The reason this doesn't work is that it requires prescience on the part of the shell: the sed script is generated (and the command substitution expanded) before sed matches any pattern, so the shell cannot do that job for you.
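You can see this for yourself by printing the expression that sed is actually given. In this sketch, the command substitution runs before sed ever starts and simply yields \1, so the generated script replaces each match with itself:
$ echo "s/\(\${[A-Z]*}\)/`eval "echo '\1'"`/"
s/\(${[A-Z]*}\)/\1/
That is the script sed receives, which is why the output file is identical to the input.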
I've done this a couple of ways in the past. Normally, I've had a list of known variables and their values, and I've done the substitution from that list:
for var in PATH VARIABLE USERNAME
do
echo 's%${'"$var"'}%'$(eval echo "\$$var")'%g'
done > sed.script
cat test | sed -f sed.script > outputfile
If you want to map variables arbitrarily, then you either need to deal with the whole environment (instead of the fixed list of variable names, use the output from env, appropriately edited), or use Perl or Python instead.
Note that if the value of an environment variable contains a slash, your version would run into problems, because it uses the slash as the field separator in the s/// notation. I used '%' since relatively few environment variables use that character - but some machines do have variables containing '%', so a complete solution is trickier. You also need to worry about backslashes in the value; you probably have to use something like '$(eval echo "\$$var" | sed 's/[\%]/\\&/g')' to escape the backslashes and percent signs in the value of the environment variable.
Final wrinkle: some versions of sed have (or had) a limited capacity for the script size - older versions of HP-UX had a limit of about 100 commands. I'm not sure whether that is still an issue, but it was as recently as 5 years ago.
The simple-minded adaptation of the original script reads:
env |
sed 's/=.*//' |
while read var
do
echo 's%${'"$var"'}%'$(eval echo "\$$var" | sed 's/[\%]/\\&/g')'%g'
done > sed.script
cat test | sed -f sed.script > outputfile
However, a better solution uses the fact that you already have the values in the output from env, so we can write:
env |
sed 's/[\%]/\\&/g;s/\([^=]*\)=\(.*\)/s%${\1}%\2%/' > sed.script
cat test | sed -f sed.script > outputfile
This is altogether safer because the shell never evaluates anything that should not be evaluated - you have to be so careful with shell metacharacters in variable values. This version can only possibly run into any trouble if some output from env is malformed, I think.
Beware - writing sed scripts with sed is an esoteric occupation, but one that illustrates the power of good tools.
All these examples are remiss in not cleaning up the temporary file(s).
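One simple way to handle that, sketched on top of the last version above, is to have the shell remove the temporary script when it exits:
# Remove the generated sed script on exit (bash/POSIX sh EXIT trap).
trap 'rm -f sed.script' EXIT
env |
sed 's/[\%]/\\&/g;s/\([^=]*\)=\(.*\)/s%${\1}%\2%/' > sed.script
sed -f sed.script test > outputfile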
Maybe you can get by without using sed:
$ echo $VARIABLE
something
$ cat test
I'd like to expand this: ${VARIABLE}
$ eval "echo \"`cat test`\"" > outputfile
$ cat outputfile
I'd like to expand this: something
Let shell variable interpolation do the work.
I have a string like below
abc="where session = '001122' and indicator = 'X'"
I want to convert it to
eng="where session in ('001122') and indicator in ('X')"
I have tried the following, using sed in bash:
eng=$(echo $abc | sed -r "s/=\s+('[^']+')/in (\1)/g")
I still get the input itself. What am I doing wrong?
You can use unadorned (basic-regex) sed if you escape the capture group parentheses (\( and \)) as well as the one-or-more quantifiers (\+):
$ eng=$(echo "$abc" | sed "s/=\s\+'\([^']\+\)'/in ('\1')/g")
$ echo "$eng"
where session in ('001122') and indicator in ('X')
It is also probably a good idea to quote your expansion of abc, since it has spaces in it, but not strictly necessary in this context.
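As a quick illustration of why quoting generally matters, take a hypothetical value containing runs of spaces - the unquoted expansion is word-split, so the spacing is lost:
$ x="a    b"
$ echo $x
a b
$ echo "$x"
a    b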
Your original code may not have worked because -r is a GNU extension. The synonym -E used to be as well, but is now part of the POSIX standard, and should therefore be relatively portable. The following version should therefore have no problems either:
$ eng=$(echo "$abc" | sed -E "s/=\s+'([^']+)'/in ('\1')/g")
What is the syntax for passing a variable to a sed command that updates the second column in a CSV file? The variable name is $tag.
This is the command I have used, but I don't know where exactly to put the variable:
basename "$dec" | sed 's/.*/&,A/' >> home/kelsabry/Downloads/Tests/results.csv
where $dec is a variable that holds a certain directory path.
Output:
Downloads, A
Documents, A
etc.
My command to pass the variable into sed to update the second column was:
basename "$dec" | sed 's/.*/&,'$tag'/' >> home/kelsabry/Downloads/Tests/results.csv
but it gave me this output:
Downloads, '$tag'
Documents, '$tag'
etc.
So, where should I put the variable $tag in the sed command?
Unfortunately, sed is neither aware of fields nor capable of accepting variables. For that, you'd use shell, awk, or some other language.
sed is a Stream EDitor and in your example is taking input from stdin, not a variable.
If you do want to embed shell variables inside a sed script, understand that you are basically creating your sed script on-the-fly, and it's important to make sure you do it safely.
For example, if there's the possibility that your $tag variable might contain something that will cause misinterpretation of the sed script (i.e. perhaps it came from user input),
you need protection. In POSIX shell, perhaps something like this:
if [ "$tag" != "${tag#*[!A-Z]}" ]; then
printf 'ERROR: invalid tag\n' >&2
exit 1
fi
or even:
case "$tag" in
[A-Z]) : ;;
*) printf 'ERROR: invalid tag\n' >&2; exit 1 ;;
esac
then
# Note the alternative to `basename`
echo "${dec##*/}" | sed 's/$/,'"$tag"'/' >> path/to/file.csv
Note that sed doesn't know anything about fields or CSV. sed is simply being used to append a string on to the end of the line.
Of course, in csh (which perhaps shouldn't be used for scripted automation), you are missing the more useful parameter expansion tools, but you can still protect yourself in other ways:
if ( $%tag == 1 ) then
switch ($tag)
case [A-Z]:
printf '%s,%s\n' `basename "$dec"` "$tag"
breaksw
default:
printf 'ERROR: invalid tag\n'
exit 1
breaksw
endsw
else
printf 'ERROR: invalid tag\n'
exit 1
endif
(Note: this is untested. Mileage varies based on multiple conditions. May contain nuts.)
The issue you listed in your question was a quoting problem in the command you used: sed 's/.*/&,'$tag'/'.
An alternative might be to use awk:
echo "${dec##*/}" | awk -v tag="$tag" '{print $0 OFS tag}' OFS=, >> path/to/file.csv
Awk is a more complete programming language, and supports named variables, unlike sed. The -v option allows you to pre-load an awk variable with the contents of a shell variable.
CSH is considered harmful by some. I'd recommend doing this in a POSIX shell instead, if only to take advantage of the much larger pool of experts who can help with your scripting questions. :)
I have a bash script with a few qsubs in it. Each of them waits for a previous qsub to finish before starting.
My first qsub consists of sending the files in a certain directory to a perl program and writing the output files to a new directory. At the end, I echo the array with all my job names. This script works as intended.
mkdir -p /perl_files_dir
for ID_FILES in `ls Infiles_dir/*.txt`;
do
JOB_ID=`echo "perl perl_scirpt.pl $ID_FILES" | qsub -j oe `
JOB_ID_ARRAY="${JOB_ID_ARRAY}:$JOB_ID"
done
echo $JOB_ID_ARRAY
My second qsub is meant to sort all the files previously made by my perl script into a new output file, and to start only after all of those jobs (about 100) are done, using depend=afterany. Again, this part works fine.
SORT_JOB=`echo "sort -m -n perl_files_dir/*.txt >>sorted_file.txt" | qsub -j oe -W depend=afterany$JOB_ID_ARRAY`
SORT_ARRAY="${SORT_ARRAY}:$SORT_JOB"
My issue is that my sorted file has a few columns (2 to 6) I wish to remove, so I came up with this last line, using awk piped to sed, with another depend=afterany:
SED=`echo "awk '{\$2="";\$3="";\$4="";\$5="";\$6=""; print \$0}' sorted_file.txt \
| sed 's/ //g' >final_file.txt" | qsub -j oe -W depend=afterany$SORT_ARRAY`
This last step creates final_file.txt, but leaves it empty. I added SED= before my echo because otherwise it would give me "Command not found".
I tried without the pipe so it would just print everything. Unfortunately, it prints nothing.
I assume it is not opening my sorted file, and that is why my final file is empty after the sed. If that's the case, then why won't awk read it?
In my script, I am using variables to define my directories and files (with the correct paths). I know my issue is not about finding my files or directories, since they are correctly defined at the beginning and used throughout the script. I tried writing the whole path instead of using a variable and I get the same results.
for ID_FILES in `ls Infiles_dir/*.txt`
Simplify this to
for ID_FILES in Infiles_dir/*.txt
ls lists the files you pass it (except when you pass it directories, then it lists their content). Rather than telling it to display a list of files and parse the output, use the list of files you already have! This is more reliable (parsing the output of ls will fail if the file names contain whitespace or wildcard characters), clearer and faster. Don't parse the output of ls.
SORT_JOB=`echo "sort -m -n perl_files_dir/*.txt >>sorted_file.txt" | qsub -j oe -W depend=afterany$JOB_ID_ARRAY`
You'd make your life simpler if you used the right form of quoting in the right place. Don't use backquotes, because it's difficult to know how to quote things inside. Use $(…) instead, it's exactly equivalent except that it is parsed in a sane way.
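For instance, nesting is painless with $(…), while nested backquotes have to be escaped by hand (a sketch with a throwaway path):
# Both print "foo", but only one is readable.
echo $(basename $(dirname /tmp/foo/bar))
echo `basename \`dirname /tmp/foo/bar\``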
I recommend using a here document for the shell snippet that you're feeding to qsub. You have fewer quoting issues to worry about, and it's more readable.
While we're at it, always put double quotes around variable substitutions and command substitutions: "$some_variable", "$(some_command)". Annoyingly, $var in shell syntax doesn't mean “take the value of the variable var”, it means “take the value of the variable var, parse it as a list of wildcard patterns, and replace each pattern by the list of matching files if there are matching files”. This extra stuff is turned off if the substitution happens inside double quotes (or in a here document, by the way): "$var" means “take the value of the variable var”.
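A tiny demonstration of that wildcard behaviour (the variable holds a pattern, not a file list; what the unquoted form prints depends on what happens to be in the current directory):
var='*.txt'
echo "$var"   # always prints: *.txt
echo $var     # prints the names of any .txt files instead, if some exist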
SORT_JOB=$(qsub -j oe -W depend="afterany$JOB_ID_ARRAY" <<'EOF'
sort -m -n perl_files_dir/*.txt >>sorted_file.txt
EOF
)
We now get to the snippet where the quoting was actually causing a problem.
SED=`echo "awk '{\$2="";\$3="";\$4="";\$5="";\$6=""; print \$0}' sorted_file.txt \
| sed 's/ //g' >final_file.txt" | qsub -j oe -W depend=afterany$SORT_ARRAY`
The string that becomes the argument to the echo command is:
awk '{$2=;$3=;$4=;$5=;$6=; print $0}' sorted_file.txt | sed 's/ //g' >final_file.txt
This is syntactically incorrect, and that's why you're not getting any output.
You didn't escape the double quotes inside what was meant to be the awk snippet. It's a lot clearer if you use a here document. Also, you don't need the SED= part. You added it because you had a command substitution (a command between backquotes), which substitutes the output of a command. But since you aren't interested in the output of the qsub command, don't capture its output, just execute it.
qsub -j oe -W depend="afterany$SORT_ARRAY" <<'EOF'
awk '{$2="";$3="";$4="";$5="";$6=""; print $0}' sorted_file.txt |
sed 's/ //g' >final_file.txt
EOF
I'm not familiar with qsub, but presumably there's a way to get the error output and the return status of the commands it runs. Inspect that error output, you should have seen the errors from awk.
The version of awk that I am using does not like the character escapes:
awk --version
GNU Awk 3.1.7
spuder@cent64$ awk '{\$2="";\$3="";\$4=""; print \$0}' foo.txt
awk: {\$2="";\$3="";\$4=""; print \$0}
awk: ^ backslash not last character on line
Try the following syntax
awk '{for(i=2;i<=7;i++) $i="";print}' foo.txt
As a side note, if you are using Torque 4.x you may not be able to use a comma-separated list of jobs with -W depend=; instead, you may need to create a new PBS declarative (-W) for each job.
e.g.
#Invalid syntax in newer versions of torque
qsub -W depend=foo,bar
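The per-job form described above would look something like this (a sketch only - the job IDs and script name are placeholders, and I have not verified it against Torque 4.x):
qsub -W depend=afterany:12345 -W depend=afterany:12346 sort_job.sh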
Resources
backslash in gawk fields
Print all but the first three columns
http://docs.adaptivecomputing.com/torque/help.htm#topics/commands/qsub.htm#-W
The sed command works as expected at the command prompt, but does not work in a shell script.
new_db_name=`echo "$new_db_name" | sed 's/$replace_string/$replace_with/'`
Why is that, and how can I fix it?
Use double quotes for the sed expression.
new_db_name=$(echo "$new_db_name" | sed "s/$replace_string/$replace_with/")
If you use bash, this should work:
new_db_name=${new_db_name/$replace_string/$replace_with}
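For example, with some made-up values:
$ new_db_name=shop_dev
$ replace_string=dev
$ replace_with=prod
$ echo "${new_db_name/$replace_string/$replace_with}"
shop_prod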
This worked for me when using environment variables:
export a=foo
export b=bar
echo a/b | sed 's/a/'$b'/'
bar/b
I used the following to pass bash variables to a function in a bash script that uses sed, i.e. I passed bash variables to a sed command:
#!/bin/bash
function solveOffendingKey(){
echo "We will delete the offending key in file: $2, in line: $1"
sleep 5
eval "sed -i '$1d' $2"
}
line='4'
file=~/ivan/known_hosts
solveOffendingKey $line $file
Kind regards!
Depending on how your variables are initialized, you may be better off using curly braces:
new_db_name=`echo "$new_db_name" | sed "s/${replace_string}/${replace_with}/"`
Maybe I'm missing something, but new_db_name=`echo "$new_db_name"` doesn't make sense here: $new_db_name is empty, so you're echoing a null result and then just capturing the output of the sed command. To capture stdout in a variable, backticks aren't recommended anymore; capture the output with $() instead.
new_db_name=$(sed "s/${replace_string}/${replace_with}/")
Take the following example:
replace_string="replace_me"
replace_with=$(cat replace_file.txt | grep "replacement_line:" | awk -F'"' '{print $2}')
Where replace_file.txt could look something like:
old_string: something_old
I like cats
replacement_line: "shiny_new_db"
Just having the variable written as $replace_with inside the sed expression won't always work; bash doesn't have enough context to tell where the variable name ends. Writing ${replace_with} tells bash explicitly which variable to expand, so the contents of the variable are substituted.
I have a file named check.txt which has the below contents:
$ cat check.txt
~/bin/tibemsadmin -server $URL -user $USER -password $PASWRD
$
I have a main script where the values of $URL, $USER and $PASWRD are obtained. I want to use sed to replace $URL, $USER and $PASWRD in check.txt with their actual values.
I am trying it like this, but it fails:
emsurl=tcp://myserver:3243
emsuser=test
emspasswd=new
sed s/$URL/${emsurl}/g check.txt >> check_new.txt
sed s/$USER/${emsuser}/g check.txt_new.txt >> check_new_1.txt
sed s/PASWRD/${emspasswd}/g check_new_1.txt >> final.txt
The desired final.txt output is as below:
~/bin/tibemsadmin -server tcp://myserver:3243 -user test -password new
Could you please help me?
You have to be rather careful with your use of quotes. You also need to learn how to do multiple operations in a single pass, and/or how to use pipes.
emsurl=tcp://myserver:3243
emsuser=test
emspasswd=new
sed -e "s%\$URL%${emsurl}%g" \
-e "s%\$USER%${emsuser}%g" \
-e "s%\$PASWRD%${emspasswd}%g" check.txt >final.txt
Your problem is that the shell expanded the '$URL' in your command line (probably to nothing), meaning that sed got to see something other than what you intended. By escaping the $ with the \, sed gets to see what you intended.
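You can check what sed was actually handed by echoing the expression yourself (this assumes URL is not set in your shell and emsurl is set as above):
$ echo s/$URL/${emsurl}/g
s//tcp://myserver:3243/g
An empty pattern followed by the extra slashes from the URL is not the substitution you meant, which is why the attempt fails.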
Note that I initially used / as the separator in the substitute operations; however, as DarkDust rightly points out, that won't work since there are slashes in the URLs. My normal fallback character is % - as now shown - but that can appear in some URLs and might not be appropriate. I'd probably use a control character, such as control-A, if I needed to worry about that - or I'd use Perl which would be able to play without getting confused.
You can also combine the three separate -e expressions into one, separating the commands with semicolons. However, I prefer the clarity of the three clearly separated operations.
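For reference, the combined form is equivalent to the three -e version above:
sed "s%\$URL%${emsurl}%g;s%\$USER%${emsuser}%g;s%\$PASWRD%${emspasswd}%g" check.txt >final.txt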
You could take a slightly different approach by modifying your main script as follows:
export URL="tcp://myserver:3243"
export USER=test
export PASWRD=new
. ./check.txt
This sets up the variables and then runs check.txt within the context of your main script
Although you don't say exactly what is failing, I think I can see the problems.
I suggest you do this:
sed "s|\$URL|${emsurl}|g"
That is, the first $ needs to be escaped because you want it literally. Then, instead of / I suggest you use | (pipe) as delimiter since it's not used in your strings. Finally, use " to ensure the content is interpreted as string by the shell.
You can then pipe everything together to not need any temporary files:
sed "s|\$URL|${emsurl}|g" | sed "s|\$USER|${emsuser}|g" | sed "s|\$PASSWRD|${emspasswd}|g"
The variable substitution should happen outside the sed expression, and the '$' should be escaped; in your case, something like this:
sed -e 's%\$URL%'$emsurl'%g' -e 's%\$USER%'$emsuser'%g' -e 's%\$PASWRD%'$emspasswd'%g' check.txt > final.txt
Anyway, in your place I would avoid using '$' to mark placeholders in a template file, because it causes confusion with shell variables; use a different pattern instead (for instance #URL#).
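A sketch of that placeholder approach, assuming check.txt is rewritten to use #URL#, #USER# and #PASWRD# as markers:
sed -e 's%#URL#%'"$emsurl"'%g' -e 's%#USER#%'"$emsuser"'%g' -e 's%#PASWRD#%'"$emspasswd"'%g' check.txt > final.txt
No '$' needs escaping, and there is no risk of the shell expanding the placeholders by accident.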