Pulling hostname from TNS entry - sed

I am working on a script that will need to determine which node a db being used by a local app is running on. I've been trying to use this as a chance to force myself to learn awk/sed and have a test script to test the statements. It's working off a copy of the tnsnames.ora file I have moved to the home folder the script is located in.
Here is a valid tnsnames.ora stanza:
(
DESCRIPTION = (
ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP) (Host=iplab)(Port=1521))
)
(CONNECT_DATA=(SID=spurs1))
)
After doing some research and getting the awk expression to pull the TNS entry into $host, I came up with the script below, but it doesn't seem to work.
#!/bin/ksh
db=spurs
host=$(awk -v db=$db "/${db}/ {for(i=1; i<=5; i++) {getline; print}}" tnsnames.ora)
echo $host
host= $host | sed 's/Host\s=\s\([a-z]+[0-9]?\)/\1/'
echo $host
When I run it, I get the following:
(DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP) (Host=hostname)(Port=1521))) (CONNECT_DATA=(SID=spurs1)) )
./tns.ksh: line 6: (DESCRIPTION: not found
(DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP) (Host=hostname)(Port=1521))) (CONNECT_DATA=(SID=spurs1)) )
From what I have seen reading tutorials and forums, I think sed is set up correctly and should be setting $host to one or more lowercase letters followed by 0 or 1 digits after Host =. Since (DESCRIPTION is the start of $host both before and after the sed statement, I'm not sure why it isn't finding it.

This worked for me:
tnsping $db | grep HOST | cut -d' ' -f 14 | sed 's/).*//g'
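The field number (14 here) depends on how tnsping happens to space out the description, so it can shift between versions; a variant that keys on the HOST= token instead would be something like this (a sketch, assuming a single address and a HOST=... clause in the tnsping output):
tnsping $db | sed -n 's/.*HOST *= *\([^)]*\).*/\1/p'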

On my system I can use this to get the host as long as the host name doesn't have an equals sign (or the actual literal word HOST in the name of the host):
echo $TNS_STRING | sed 's/.*HOST//g' | sed 's/).*//g' | sed 's/=//g' | sed 's/\s*//g'

Your value for $host is likely a multiline value, so you need to quote it anyplace you use it, i.e.
host=$(awk -v db=$db "/${db}/ {for(i=1; i<=5; i++) {getline; print}}" tnsnames.ora)
echo "$host"
You also need to capture the output (using command-substitution) via $(...)
host=$(echo "$host" | sed 's/Host\s=\s\([a-z]+[0-9]?\)/\1/')
echo "$host"
(and echo it), so it can be processed by sed
Revise to:
host=$(echo $host | sed 's/.*Host=//; s/).*$//')
echo "$host"
I've switched back to just $host, without the double quotes, as you don't want the linebreaks in the data. Now it is all one big string, and the regex strips everything up to Host=, and then strips everything after the first remaining ) character.
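Putting the two corrected lines together, the whole script would look something along these lines (a sketch; with the sample stanza from the question it should end up printing just the host name, e.g. iplab):
#!/bin/ksh
db=spurs
# grab the lines of the matching tnsnames.ora entry
host=$(awk -v db=$db "/${db}/ {for(i=1; i<=5; i++) {getline; print}}" tnsnames.ora)
# flatten to a single line, then strip everything before Host= and after the next )
host=$(echo $host | sed 's/.*Host=//; s/).*$//')
echo "$host"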
If you still get error messages, I don't have access to a tnsnames.ora record, so please edit your query to include a valid record.
I hope this helps.

You may be better off relying on the output of tnsping instead of parsing the file; tnsping appears to emit the description on one line:
host=$(
  tnsping $db | while read line; do
    if [[ "$line" == *HOST* ]]; then
      s=${line#*HOST=}; s=${s%%)*}; echo "$s"; break
    fi
  done
)

This might work for you:
db=spurs
host=$(sed '/^(/,/^)/!d;/^(/{h;d};H;/^)/!d;g;/'"$db"'/!d;s/.*Host=\([^)]*\).*/\1/' tnsnames.ora)
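For readability, here is the same script spread over several lines with comments (a sketch of the same approach; the commented form relies on GNU sed, which accepts # comments inside a script):
db=spurs
host=$(sed '
  # keep only lines between an opening "(" and a closing ")"
  /^(/,/^)/!d
  # on the opening line, start a fresh copy of the stanza in the hold space
  /^(/{h;d}
  # append each following line of the stanza to the hold space
  H
  # do nothing more until the closing ")" line
  /^)/!d
  # pull the assembled stanza back into the pattern space
  g
  # skip stanzas that do not mention the alias
  /'"$db"'/!d
  # keep only the value after Host=
  s/.*Host=\([^)]*\).*/\1/
' tnsnames.ora)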

Tested Code:
OIFS=$IFS;
IFS="(";
tns=`tnsping TNS_ALIAS`
tns_arr=($tns);
tns_info=(`(for ((i=0; i<${#tns_arr[@]}; ++i)); do echo "${tns_arr[$i]/)/}"; done)| grep 'HOST\|PORT'|sed 's/)//g'|sed 's/ //g'`)
for ((i=0; i<${#tns_info[@]}; ++i)); do eval "export ${tns_info[$i]}"; done
echo "host:" $HOST
echo "port:" $PORT
IFS=$OIFS;

How to execute this command in a systemd service file?

Ok, so I have this command that turns off my touchscreen. It works when I execute it in a root shell.
So this works:
sudo su
/usr/bin/echo $(ls /sys/bus/hid/drivers/hid-multitouch | awk NR==1'{print $1}') > /sys/bus/hid/drivers/hid-multitouch/unbind
And then my touchscreen stops working, which is the result that I wanted.
Now I want to make a touchscreen.service file to execute this on every boot. So in the service file I include:
ExecStart=/usr/bin/echo $(ls /sys/bus/hid/drivers/hid-multitouch | awk NR==1'{print $1}') > /sys/bus/hid/drivers/hid-multitouch/unbind
However it isn't working, nor throwing any errors that I've been able to catch.
I do know from earlier fiddlings with .service files that I might actually need to use /usr/bin/sh -c, so I have also tried:
ExecStart=/usr/bin/sh -c "/usr/bin/echo $(ls /sys/bus/hid/drivers/hid-multitouch | awk NR==1'{print $1}') > /sys/bus/hid/drivers/hid-multitouch/unbind"
Yet this also doesn't work... maybe because of the awk NR==1'{print $1}' part? I have also tried replacing it with awk NR==1'\''{print $1}'\'' but again it fails to work.
Does anyone have any ideas on how to get the command that is working in my root cli environment to also work as a systemd service?
To start with,
The syntax of the awk command is just wrong; the quotes are incorrectly placed. The NR == 1 condition is part of the awk script, selecting the first line (record) of the input, so it belongs inside the quotes, i.e.
awk NR==1'{print $1}'
# ^^^^^^^ should be within quotes
awk 'NR == 1 { print $1 }'
Your sequence of echo, ls and the command substitution $(..) doesn't look right. You are effectively echo-ing the literal string /sys/bus/hid/drivers/hid-multitouch (if ls finds the file at that path) over to the pipe, and awk just writes that to the /sys/bus/hid/drivers/hid-multitouch/unbind file, which might not be your desired action. You just need to run the command on the file directly:
awk 'NR == 1 { print $1 }' /sys/bus/hid/drivers/hid-multitouch > /sys/bus/hid/drivers/hid-multitouch/unbind
Now that the awk command is fixed, you have two options to run it as part of systemd: either put your command in a script, or run the command directly. For putting it in a script, refer to the Unix.SE answer Where do I put scripts executed by systemd units?. As for running the command directly in ExecStart, aside from using /bin/sh, also use the path /bin/awk.
So putting it together, and using /bin/ over /usr/bin, you can do the following. This command uses double quotes around the awk script, so the $1 needs to be escaped:
ExecStart=/bin/sh -c '/bin/awk "NR == 1 { print \$1 }" /sys/bus/hid/drivers/hid-multitouch > /sys/bus/hid/drivers/hid-multitouch/unbind'
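For completeness, a minimal touchscreen.service wrapping that line might look something like this (a sketch, not from the original thread; the Description text and the multi-user.target install target are assumptions):
# /etc/systemd/system/touchscreen.service
[Unit]
Description=Unbind the hid-multitouch touchscreen

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/bin/awk "NR == 1 { print \$1 }" /sys/bus/hid/drivers/hid-multitouch > /sys/bus/hid/drivers/hid-multitouch/unbind'

[Install]
WantedBy=multi-user.target
After saving it, systemctl daemon-reload followed by systemctl enable --now touchscreen.service should run it immediately and on every boot.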

grep: can't open "command"

I am logging in to the remote machine, executing a command and then grepping the result. I am using the Expect module. Here is my sample code.
use Expect;
my $exp=new Expect();
$exp->spawn("ssh $hostname\r");
$exp->expect(5,"*]-> ");
$exp->send("command sent here \r");
$exp->expect(5,"*]-> ");
my $res=$exp->before(); # Here I'll get the command output in a variable. The variable contains TCPIP:1.1.1.1 on one line and UDPIP:1.2.2.2 on another line.
my $id=`grep -i TCPIP $res | cut -d ":" -f2 `;
print " The result is $id \n";
But here I am getting an error:
grep: can't open "command sent " .sh: TCPIP not found sh:UDPIP not found.
Maybe you can do it even without Expect?
my $id = `ssh $hostname <your_command> | grep -i TCPIP | cut -d":" -f2`;
grep takes a filename as a parameter, not a string. You need something like this:
my $id = `echo $res | grep -i TCPIP | cut -d":" -f2`;
Make sure the command works manually via ssh first though.
But consider using Perl itself to do the match and cut rather than spawning out to grep; that would be better.

Dynamically building an exclude list for both rsync & egrep formats

I wonder if anyone out there can assist me in trying to solve an issue.
I have written a set of shell scripts with the purpose of auditing remote file systems based on a GOLD build on an audit server.
As part of this, I do the following:
1) Use rsync to work out any new files or directories, any modified or removed files
2) Use find ${source_filesystem} -ls on both local & remote to work out permissions differences
Now as part of this there are certain files or directories that I am excluding, i.e. logs, trace files etc.
So in order to achieve this I use 2 methods:
1) RSYNC - I have an exclude-list that is added using --exclude-from flag
2) find -ls - I use a egrep -v statement to exclude the same as the rsync exclude-list:
e.g. find -L ${source_filesystem} -ls | egrep -v "$SEXCLUDE_supt"
So my issue is that I have to maintain 2 separate lists, and this is a bit of an admin nightmare.
I am looking for some assistance or advice on whether it is possible to dynamically build a list of exclusions that can be used for both the rsync and the find -ls.
Here is the format of what the exclude lists look like:
RSYNC:
*.log
*.out
*.csv
logs
shared
tracing
jdk*
8.6_Code
rpsupport
dbarchive
inarchive
comms
PR116PICL
**/lost+found*/
dlxwhsr*
regression
tmp
working
investigation
Investigation
dcsserver_weblogic_*.ear
dcswebrdtEAR_weblogic_*.ear
FIND:
SEXCLUDE_supt="\.log|\.out|\.csv|logs|shared|PR116PICL|tracing|lost\+found|jdk|8\.6\_Code|rpsupport|dbarchive|inarchive|comms|dlxwhsr|regression|tmp|working|investigation|Investigation|dcsserver_weblogic_|dcswebrdtEAR_weblogic_"
You don't need to create a second list for your find command. grep can handle a list of patterns using the -f flag. From the manual:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero
patterns, and therefore matches nothing. (-f is specified by POSIX.)
Here's what I'd do:
find -L ${source_filesystem} -ls | grep -Evf your_rsync_exclude_file_here
This should also work for filenames containing newlines and spaces. Please let me know how it goes.
In the end the grep -Evf was a bit of a nightmare, as rsync doesn't support regex; it uses its own pattern syntax, which is not the same.
So I then pursued my other idea of dynamically building the exclude list for egrep by parsing the rsync exclude-list and building a variable on the fly to pass into egrep.
This is the method I used:
#!/bin/ksh
# Create Signature of current build
AFS=$1
#Create Signature File
crSig()
{
find -L ${SRC} -ls | egrep -v "$SEXCLUDE" | awk '{fws = ""; for (i = 11; i <= NF; i++) fws = fws $i " "; print $3, $6, fws}' | sort >${BASE}/${SIFI}.${AFS}
}
#Setup SRC, TRG & SCROOT
LoadAuditReqs()
{
export SRC=`grep ${AFS} ${CONF}/fileSystem.properties | awk {'print $2'}`
export TRG=`grep ${AFS} ${CONF}/fileSystem.properties | awk {'print $3'}`
export SCROOT=`grep ${AFS} ${CONF}/fileSystem.properties | awk {'print $4'}`
export BEXCLUDE=$(sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' ${CONF}/exclude-list.${AFS} | tr "\n" "|")
export SEXCLUDE=$(echo ${BEXCLUDE} | sed 's/\(.*\)|/\1/')
}
#Load Properties File
LoadProperties()
{
. /users/rpapp/rpmonit/audit_tool/conf/environment.properties
}
#Functions
LoadProperties
LoadAuditReqs
crSig
So with these new variables:
export BEXCLUDE=$(sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' ${CONF}/exclude-list.${AFS} | tr "\n" "|")
export SEXCLUDE=$(echo ${BEXCLUDE} | sed 's/\(.*\)|/\1/')
The first sed removes "*" and "/", then matches my special characters and prepends "\" to escape them.
Then, using "tr", it replaces each newline with "|", and finally that output is rerun through sed to remove the trailing "|", producing the $SEXCLUDE variable that egrep uses in the crSig function.
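As a worked example, a hypothetical exclude-list containing just a few of the patterns above would be transformed like this:
# hypothetical exclude-list contents
*.log
logs
8.6_Code
**/lost+found*/
# after the sed/tr pipeline, BEXCLUDE holds (note the trailing "|"):
\.log|logs|8\.6\_Code|lost\+found|
# after the final sed strips that trailing "|", SEXCLUDE holds:
\.log|logs|8\.6\_Code|lost\+found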
What do you think?

How to trim wipe output with sed?

I want to trim the output of the wipe command with sed.
I tried to use this one:
wipe -vx7 /dev/sdb 2>&1 | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
but it doesn't work for some reason.
When I use echo and sed to reproduce the output of the wipe command, it works!
echo "/dev/sdb: 10%" | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
What am I doing wrong?
Thanks!
That looks like a progress indicator. They are often output directly to the tty instead of to stdout or stderr. You may be able to use the expect script called unbuffer (source) or some other method to create a pseudo tty. Be aware that there will probably be more junk such as \r, etc., that you may need to filter out.
Demonstration:
$ cat foo
#!/bin/sh
echo hello > /dev/tty
$ a=$(./foo)
hello
$ echo $a
$ a=$(unbuffer ./foo)
$ echo $a
hello
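Applied to the original command, that would be something along these lines (a sketch; it assumes the unbuffer script from the expect package is installed, that wipe's progress really does go to the terminal, and the tr turns the carriage returns a progress indicator typically emits into newlines):
unbuffer wipe -vx7 /dev/sdb 2>&1 | tr '\r' '\n' | sed -u 's/.*\ \([0-9]\+\).*/\1/g'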

How do I push `sed` matches to the shell call in the replacement pattern?

I need to replace several URLs in a text file with some content dependent on the URL itself. Let's say for simplicity it's the first line of the document at the URL.
What I'm trying is this:
sed "s/^URL=\(.*\)/TITLE=$(curl -s \1 | head -n 1)/" file.txt
This doesn't work, since \1 is not set. However, the shell is getting called. Can I somehow push the sed match variables to that subprocess?
The accepted answer is just plain wrong. Proof:
Make an executable script foo.sh:
#! /bin/bash
echo $* 1>&2
Now run it:
$ echo foo | sed -e "s/\\(foo\\)/$(./foo.sh \\1)/"
\1
$
The $(...) is expanded before sed is run.
So you are trying to call an external command from inside the replacement pattern of a sed substitution. I don't think it can be done; the $... inside a pattern just allows you to use an already existing (constant) shell variable.
I'd go with Perl, see the /e option in the search-replace operator (s/.../.../e).
UPDATE: I was wrong, sed plays nicely with the shell, and it allows you to do that. But then the backslash in \1 should be escaped. Try instead:
sed "s/^URL=\(.*\)/TITLE=$(curl -s \\1 | head -n 1)/" file.txt
Try this:
sed "s/^URL=\(.*\)/\1/" file.txt | while read url; do sed "s#URL=\($url\)#TITLE=$(curl -s $url | head -n 1)#" file.txt; done
If there are duplicate URLs in the original file, then there will be n^2 of them in the output. The # as a delimiter depends on the URLs not including that character.
Late reply, but making sure people don't get thrown off by the answers here -- this can be done in GNU sed using the e command. The following, for example, decrements a number at the beginning of a line:
echo "444 foo" | sed "s/\([0-9]*\)\(.*\)/expr \1 - 1 | tr -d '\n'; echo \"\2\";/e"
will produce:
443 foo
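Applied to the URL/TITLE problem in the question, the same trick would look roughly like this (a sketch; it needs GNU sed and assumes the URLs, and the fetched first lines, contain nothing the shell would expand and nothing that collides with the # delimiter):
sed 's#^URL=\(.*\)#echo "TITLE=$(curl -s \1 | head -n 1)"#e' file.txt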