I'm working with the Google Healthcare API, and there's a step in the walkthrough that uses netcat to send an HL7 message to the MLLP adapter.
(I used nmap to download ncat for Windows.)
I have the adapter running locally, but the command they provide is written for Mac/*nix users and I'm on Windows.
echo -n -e "\x0b$(cat hl7.txt)\x1c\x0d" | nc -q1 localhost 2575 | less
So I tried rewriting this for Windows PowerShell:
$hl7 = type hl7.txt
Write-Output "-n -e \x0b" $hl7 "\x1c\x0d" | ncat -q1 localhost 2575 | less
When I try this, I get an error that "less" is invalid, and that -q1 is an invalid option.
If I remove -q1 and | less the command executes with no output or error message.
Am I using ncat or Write-Output incorrectly here?
What is the -q1 parameter?
It doesn't seem to be a valid ncat parameter from what I've researched.
I've been following this walkthrough:
https://cloud.google.com/healthcare/docs/how-tos/mllp-adapter#connection_refused_error_when_running_locally
We're really converting the echo command, not the ncat command; the syntax for ASCII codes is different in PowerShell.
[string][char]0x0b + (get-content hl7.txt -raw) + [char]0x1c + [char]0x0d |
ncat -q1 localhost 2575
In ASCII, 0x0b is vertical tab, 0x1c is file separator, and 0x0d is carriage return: http://www.asciitable.com
Or this. `v is 0x0b and `r is 0x0d; there is no short escape for 0x1c, so `u{1c} is used (the `u{} escape requires PowerShell 6.2 or later).
"`v$(get-content hl7.txt)`u{1c}`r" | ncat -q1 localhost 2575
Or, if you prefer `u{} escapes throughout, it's the same thing; all three forms produce the same bytes.
"`u{0b}$(get-content hl7.txt)`u{1c}`u{0d}" | ncat -q1 localhost 2575
The best I can explain is by example.
Create named pipe: mkfifo pipe
Create 5 text files, a.txt, b.txt, c.txt, d.txt, e.txt (they can hold any contents for this example)
cat [a-e].txt > pipe
Of course, because the pipe is not yet open on the consumer side, the terminal will appear to hang.
In another terminal, tail -fn +1 pipe
All content is fed through the pipe (consumed and printed out by tail) as expected.
But instead of simply printing out content consumed, I would like each piped text file to be redirected to a command (5 separate processes) that can only handle one at a time:
Something like python some-script.py < pipe but where it would create 5 different instances (one instance per text file content).
Is there any way for the consumer to differentiate between objects coming in? Or does the data get appended and read all as one stream?
A potential solution that might be generally applicable (looking forward to hearing about more efficient alternatives):
First, an example python script that the question describes:
some-script.py:
import sys
lines = sys.stdin.readlines()
print('>>>START-OF-STDIN<<<')
print(''.join(lines))
print('>>>END-OF-STDIN<<<')
The goal is for the stream of text coming from the pipe to be differentiable.
An example of the producers:
cat a.txt | echo $(base64 -w 0) | cat > pipe &
cat b.txt | echo $(base64 -w 0) | cat > pipe &
cat c.txt | echo $(base64 -w 0) | cat > pipe &
cat d.txt | echo $(base64 -w 0) | cat > pipe &
cat e.txt | echo $(base64 -w 0) | cat > pipe &
A description of the producers:
cat concatenates entire file and then pipes to echo
echo displays text coming from sub-command $(base64 -w 0) and pipes to cat
base64 -w 0 encodes full file contents into a single line
cat used in this case concatenates the full line before redirecting output to pipe. Without it, the consumer doesn't work properly (try for yourself)
An example of the consumer:
tail -fn +1 pipe | while read line ; do (echo $line | base64 -d | cat | python some-script.py) ; done
A description of the consumer:
tail -fn +1 pipe follows (-f) pipe from the beginning (-n +1) without exiting process and pipes content to read within a while loop
while there are lines to be read (assuming base64 encoded single lines coming from producers), each line is passed to a sub-shell
In each subshell
echo pipes the line to base64 -d (-d stands for decode)
base64 -d pipes the decoded line (which now spans multiple lines potentially) to cat
cat concatenates the lines and pipes it as one to python some-script.py
Finally, the example python script is able to read line by line in exactly the same way as cat example.txt | python some-script.py
The above was useful to me when a host process did not have Docker permissions but could pipe to a FIFO (named pipe) file mounted in as a volume to a container. Potentially multiple instances of the consumer could happen in parallel. I think the above successfully differentiates content coming in so that the isolated process can process content coming in from named pipe.
An example of the Docker command involving pipe symbols, etc:
"bash -c 'tail -fn +1 pipe | while read line ; do (echo $line | base64 -d | cat | python some-script.py) ; done'"
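The reason the base64 step makes the records differentiable can be shown in a few lines of Python (an illustration, not part of the pipeline itself): encoding collapses arbitrary multi-line content into a single newline-free line, so each `read` in the consumer loop receives exactly one producer's payload.

```python
import base64

def encode_record(data: bytes) -> bytes:
    """Producer side: one base64 line per file (like base64 -w 0)."""
    return base64.b64encode(data) + b"\n"

def decode_records(stream: bytes):
    """Consumer side: split on newlines, decode each record independently."""
    return [base64.b64decode(line) for line in stream.splitlines() if line]

# Two multi-line "files" travel through one stream yet stay separable.
stream = encode_record(b"line1\nline2\n") + encode_record(b"another\nfile\n")
```

Newline becomes an unambiguous record separator precisely because base64 output can never contain one.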
I am converting binary data into hex and viewing the hex with head from a continuous stream.
I run the following, where the conversion is from here:
echo "ibase=2;obase=10000;$(echo `sed '1q;d' /Users/masi/Dropbox/123/r3.raw`)" \
    | bc \
    | head
and I get
(standard_in) 1: illegal character: H
so the input is the wrong datatype.
How can you do the conversion from binary to ASCII hex with a single command efficiently?
I run the following code based on Wintermute's comment
hexdump -e '/4 "%08x\n"' r3.raw
For instance, head r3.raw | hexdump -e '/4 "%08x\n"' gives
ffffffff
555eea57
...
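For comparison, the effect of that format string can be sketched in Python (a hypothetical helper, not part of hexdump): each 4-byte group is printed as one 8-digit hex word. Note that hexdump's %08x interprets each group in host byte order, little-endian on typical hardware, which is why bytes 57 ea 5e 55 read back as 555eea57 above.

```python
import struct

def hex_words(data: bytes):
    """Render each 4-byte group as an 8-digit hex word, mimicking
    hexdump -e '/4 "%08x\n"' (host byte order; little-endian assumed)."""
    words = []
    for off in range(0, len(data) - len(data) % 4, 4):
        (word,) = struct.unpack("<I", data[off:off + 4])  # little-endian uint32
        words.append(f"{word:08x}")
    return words
```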
I have some data from an Nmap Scan. It looks like this.
Nmap scan report for 10.16.17.34
Host is up (0.011s latency).
Not shown: 65530 closed ports
PORT STATE SERVICE
22/tcp open ssh
23/tcp open telnet
80/tcp open http
| http-headers:
| Date: THU, 30 AUG 2012 22:46:11 GMT
| Expires: THU, 30 AUG 2012 22:46:11 GMT
| Content-type: text/html
|
|_ (Request type: GET)
443/tcp open https
| ssl-enum-ciphers:
| SSLv3
| Ciphers (11)
| TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA - unknown strength
| TLS_RSA_EXPORT1024_WITH_RC4_56_SHA - unknown strength
| TLS_RSA_EXPORT_WITH_DES40_CBC_SHA - unknown strength
| TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 - unknown strength
| TLS_RSA_EXPORT_WITH_RC4_40_MD5 - unknown strength
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - unknown strength
| TLS_RSA_WITH_DES_CBC_SHA - unknown strength
| TLS_RSA_WITH_RC4_128_MD5 - unknown strength
| TLS_RSA_WITH_RC4_128_SHA - strong
| TLSv1.0
| Ciphers (10)
| TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA - unknown strength
| TLS_RSA_EXPORT1024_WITH_RC4_56_SHA - unknown strength
| TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 - unknown strength
| TLS_RSA_EXPORT_WITH_RC4_40_MD5 - unknown strength
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - unknown strength
| TLS_RSA_WITH_DES_CBC_SHA - unknown strength
| TLS_RSA_WITH_RC4_128_MD5 - unknown strength
| TLS_RSA_WITH_RC4_128_SHA - strong
| Compressors (1)
| NULL
|_ Least strength = unknown strength
2023/tcp open xinuexpansion3
Nmap scan report for 10.16.40.0
Host is up (0.00062s latency).
All 65535 scanned ports on 10.16.40.0 are closed
Nmap scan report for 10.16.40.1
Host is up (0.00071s latency).
All 65535 scanned ports on 10.16.40.1 are closed
What I am attempting to do is use awk, sed, grep, or something else to extract any section that starts with "Nmap scan report", ends at a blank line, and contains ssl-enum-ciphers. I figured out with awk how to print each section, but I can't get it to check for the ssl line. I'm out of my league with this.
Thanks
What you have is blank-line separated records. You can use awk to check for your ssl-enum-ciphers:
awk -v RS='' '/ssl-enum-ciphers/' file.txt
This will check that the record doesn't contain the phrase 'host down':
awk -v RS='' '/ssl-enum-ciphers/ && !/host down/' file.txt
You could make this more stringent by changing the field separator to a newline character:
awk 'BEGIN { RS=""; FS="\n" } /ssl-enum-ciphers/ && $1 !~ /host down/' file.txt
Add some newlines between records:
awk 'BEGIN { RS=""; FS="\n" } /ssl-enum-ciphers/ && $1 !~ /host down/ { printf "%s\n\n", $0 }' file.txt
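The same record logic, sketched in Python for comparison (function name and sample text are illustrative): split on blank lines, exactly as awk's RS="" does, then filter the records.

```python
import re

def ssl_records(text: str):
    """Split blank-line separated records and keep those that mention
    ssl-enum-ciphers but not 'host down' (mirrors the awk one-liners)."""
    records = re.split(r"\n\s*\n", text.strip())
    return [r for r in records
            if "ssl-enum-ciphers" in r and "host down" not in r]

sample = ("Nmap scan report for 10.0.0.1\n| ssl-enum-ciphers:\n|   SSLv3\n\n"
          "Nmap scan report for 10.0.0.2\nAll 65535 scanned ports closed\n")
```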
Processing Nmap text output is tricky and fraught with dangers, since it can change from version to version. For parsing Nmap output, use the XML output with the -oX or -oA arguments. Then use an XML parsing library or utility to extract the information you need.
For your example, use xmlstarlet to extract the host element that contains a script element with the id attribute set to "ssl-enum-ciphers". This example will output the IP address of the target, followed by the output from the ssl-enum-ciphers script:
xmlstarlet sel -t -m '//script[@id="ssl-enum-ciphers"]' \
  -v '../../../address[@addrtype="ipv4"]/@addr' -v '@output' output.xml
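If you'd rather stay in a scripting language, the same query can be sketched with Python's standard xml.etree (element and attribute names follow Nmap's XML output; the helper name is mine):

```python
import xml.etree.ElementTree as ET

def hosts_with_script(xml_text: str, script_id="ssl-enum-ciphers"):
    """Yield (ipv4 address, script output) for hosts whose ports carry a
    <script id=...> element, mirroring the xmlstarlet query above."""
    root = ET.fromstring(xml_text)
    results = []
    for host in root.iter("host"):
        addr = host.find("address[@addrtype='ipv4']")
        for script in host.iter("script"):
            if script.get("id") == script_id and addr is not None:
                results.append((addr.get("addr"), script.get("output")))
    return results
```

Either way, the XML route survives Nmap version upgrades far better than scraping the text report.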
In the next release of Nmap, script output itself will be further broken into XML structures, making it easier to do things like output a list of only the weak ciphers in use.
Some commands in Solaris (such as iostat) report disk related information using disk names such as sd0 or sdd2. Is there a consistent way to map these names back to the standard /dev/dsk/c?t?d?s? disk names in Solaris?
Edit: As Amit points out, iostat -n produces device names such as c0t0d0s0 instead of sd0. But how do I find out that sd0 actually is c0t0d0s0? I'm looking for something that produces a list like this:
sd0=/dev/dsk/c0t0d0s0
...
sdd2=/dev/dsk/c1t0d0s4
...
Maybe I could run iostat twice (with and without -n) and then join up the results and hope that the number of lines and device sorting produced by iostat is identical between the two runs?
Following Amit's idea to answer my own question, this is what I have come up with:
iostat -x|tail -n +3|awk '{print $1}'>/tmp/f0.txt.$$
iostat -nx|tail -n +3|awk '{print "/dev/dsk/"$11}'>/tmp/f1.txt.$$
paste -d= /tmp/f[01].txt.$$
rm /tmp/f[01].txt.$$
Running this on a Solaris 10 server gives the following output:
sd0=/dev/dsk/c0t0d0
sd1=/dev/dsk/c0t1d0
sd4=/dev/dsk/c0t4d0
sd6=/dev/dsk/c0t6d0
sd15=/dev/dsk/c1t0d0
sd16=/dev/dsk/c1t1d0
sd21=/dev/dsk/c1t6d0
ssd0=/dev/dsk/c2t1d0
ssd1=/dev/dsk/c3t5d0
ssd3=/dev/dsk/c3t6d0
ssd4=/dev/dsk/c3t22d0
ssd5=/dev/dsk/c3t20d0
ssd7=/dev/dsk/c3t21d0
ssd8=/dev/dsk/c3t2d0
ssd18=/dev/dsk/c3t3d0
ssd19=/dev/dsk/c3t4d0
ssd28=/dev/dsk/c3t0d0
ssd29=/dev/dsk/c3t18d0
ssd30=/dev/dsk/c3t17d0
ssd32=/dev/dsk/c3t16d0
ssd33=/dev/dsk/c3t19d0
ssd34=/dev/dsk/c3t1d0
The solution is not very elegant (it's not a one-liner), but it seems to work.
One-liner version of the accepted answer (I only have 1 reputation so I can't post a comment):
paste -d= <(iostat -x | awk '{print $1}') <(iostat -xn | awk '{print $NF}') | tail -n +3
Try using the '-n' switch, e.g. 'iostat -n'.
As pointed out in other answers, you can map the device name back to the instance name via the device path and information contained in /etc/path_to_inst. Here is a Perl script that will accomplish the task:
#!/usr/bin/env perl
use strict;
my @path_to_inst = qx#cat /etc/path_to_inst#;
map {s/"//g} @path_to_inst;
my ($device, $path, @instances);
for my $line (qx#ls -l /dev/dsk/*s2#) {
    ($device, $path) = (split(/\s+/, $line))[-3, -1];
    $path =~ s#.*/devices(.*):c#$1#;
    @instances =
        map {join("", (split /\s+/)[-1, -2])}
        grep {/$path/} @path_to_inst;
    for my $instance (@instances) {
        print "$device $instance\n";
    }
}
I found the following in the Solaris Transition Guide:
"Instance Names
Instance names refer to the nth device in the system (for example, sd20).
Instance names are occasionally reported in driver error messages. You can determine the binding of an instance name to a physical name by looking at dmesg(1M) output, as in the following example.
sd9 at esp2: target 1 lun 1
sd9 is /sbus#1,f8000000/esp#0,800000/sd#1,0
<SUN0424 cyl 1151 alt 2 hd 9 sec 80>
Once the instance name has been assigned to a device, it remains bound to that device.
Instance numbers are encoded in a device's minor number. To keep instance numbers consistent across reboots, the system records them in the /etc/path_to_inst file. This file is read only at boot time, and is currently updated by the add_drv(1M) and drvconf"
So based upon that, I wrote the following script:
for device in /dev/dsk/*s2
do
dpath="$(ls -l $device | nawk '{print $11}')"
dpath="${dpath#*devices/}"
dpath="${dpath%:*}"
iname="$(nawk -v dpath=$dpath '{
if ($0 ~ dpath) {
gsub("\"", "", $3)
print $3 $2
}
}' /etc/path_to_inst)"
echo "$(basename ${device}) = ${iname}"
done
By reading the information directly out of the path_to_inst file, we are allowing for adding and deleting devices, which will skew the instance numbers if you simply count the instances in the /devices directory tree.
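The /etc/path_to_inst format those scripts rely on is simple: each line is a quoted physical path, an instance number, and a quoted driver name. A short Python sketch of the parsing step (sample line taken from the dmesg example above; function name is mine):

```python
def parse_path_to_inst(lines):
    """Map physical device path -> instance name (driver + instance number),
    e.g. '/sbus@1,.../sd@1,0' -> 'sd9'. Line format: "path" instance "driver".
    Blank lines and comments are skipped."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        path, instance, driver = line.replace('"', "").split()
        mapping[path] = driver + instance
    return mapping
```

Resolving the /dev/dsk symlink to its /devices target and looking the target up in this mapping gives the sdN-to-cXtYdZ pairing the question asks for.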
I think the simplest way to find the descriptive name, given an instance name, is:
# iostat -xn sd0
                 extended device statistics
   r/s   w/s   kr/s  kw/s wait actv wsvc_t asvc_t  %w  %b device
   4.9   0.2  312.1   1.9  0.0  0.0    3.3    3.5   0   1 c1t1d0
#
The last column shows the descriptive name for the provided instance name.
sd0 and ssd0 are instance names of devices. You can check /etc/path_to_inst to get the instance name mapping to the physical device name, then check the link in /dev/dsk (to see which physical device it points to). It's a 100% reliable method, though I don't know how to script it ;)
I found this snippet on the internet some time ago, and it does the trick. This was on Solaris 8:
#!/bin/sh
cd /dev/rdsk
/usr/bin/ls -l *s0 | tee /tmp/d1c |awk '{print "/usr/bin/ls -l "$11}' | \
sh | awk '{print "sd" substr($0,38,4)/8}' >/tmp/d1d
awk '{print substr($9,1,6)}' /tmp/d1c |paste - /tmp/d1d
rm /tmp/d1[cd]
A slight variation to allow for disk names longer than 8 characters (encountered when dealing with disk arrays on a SAN):
#!/bin/sh
cd /dev/rdsk
/usr/bin/ls -l *s0 | tee /tmp/d1c | awk '{print "/usr/bin/ls -l "$11}' | \
sh | awk '{print "sd" substr($0,38,4)/8}' >/tmp/d1d
awk '{print substr($9,1,index($9,"s0")-1)}' /tmp/d1c | paste - /tmp/d1d
rm /tmp/d1[cd]