Centreon check_snmp returns raw data - plugins

I am monitoring an NTP server using Centreon. I am trying to get the System Date through SNMP. Using snmpwalk, I identified the correct OID, which is HOST-RESOURCES-MIB::hrSystemDate.0 (or .1.3.6.1.2.1.25.1.2.0).
Using snmpget with the numeric OID, I get back the correct value, like this:
HOST-RESOURCES-MIB::hrSystemDate.0 = STRING: 2017-1-19,9:51:25.0,+0:0
Now, back to Centreon. I use the check_snmp plugin with the following command:
./check_snmp -H xx.xx.xx.xx -C xxxxxx -o .1.3.6.1.2.1.25.1.2.0 -l 'System Date'
The problem is the value returned is in raw form:
SNMP OK - System Date 07 E1 01 13 09 35 01 00 2B 00 00 | 'System Date'=07
I updated nagios-plugins and tried all the available options, but I cannot get the plugin to return the same thing as my snmpget result.
Any ideas?

I'm not exactly sure why, but you're certainly correct!
While replicating the issue, I got around it by simply using HOST-RESOURCES-MIB::hrSystemDate.0 as the OID in check_snmp, like this:
[nagios#nagios libexec]# ./check_snmp -H hh -C cc -o HOST-RESOURCES-MIB::hrSystemDate.0
SNMP OK - 2017-1-19,9:28:45.0,-6:0 | HOST-RESOURCES-MIB::hrSystemDate.0=2017
Where hh and cc are hostname and community string, respectively.
Hope this helps!
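For reference, the raw value check_snmp printed is the underlying SNMP DateAndTime octet string (11 bytes: a 2-byte year, then month, day, hour, minute, second, deciseconds, UTC-offset direction, and offset hours/minutes); the MIB's DISPLAY-HINT is what normally renders it as a date. A rough decoding sketch in shell, using the hex string from the output above:

```shell
# Decode an SNMP DateAndTime hex dump into the snmpget-style format.
decode_dateandtime() {
  set -- $1                          # split the hex bytes into $1..$11
  case $9 in 2B) dir='+' ;; 2D) dir='-' ;; *) dir='?' ;; esac
  printf '%d-%d-%d,%d:%d:%d.%d,%s%d:%d\n' \
    "$(( 0x$1 * 256 + 0x$2 ))" "$(( 0x$3 ))" "$(( 0x$4 ))" \
    "$(( 0x$5 ))" "$(( 0x$6 ))" "$(( 0x$7 ))" "$(( 0x$8 ))" \
    "$dir" "$(( 0x${10} ))" "$(( 0x${11} ))"
}

decode_dateandtime "07 E1 01 13 09 35 01 00 2B 00 00"
# -> 2017-1-19,9:53:1.0,+0:0
```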

Linux SED replace HEX in file instead of insert? \x2A

I have these bytes:
E6 2A 1B EF 11 00 00 00 00 00 00 4E 43 DB E8
I need to replace them with these:
64 08 1A EF 11 00 00 00 00 00 DA D8 26 04
When I started experimenting, I noticed one strange thing.
sed -e 's/\xE6/\x64/g'
This replaces the first E6 with 64 fine. However, when I try to change more bytes, 2A causes a problem:
sed -e 's/\xE6\x2A/\x64\x08/g'
As I understand it, 2A is being interpreted as something special. How do I avoid that? I just need to change 2A to 08. Thanks in advance :)
UPDATED
Now I'm stuck on \x26. sed -e 's/\xDB/\x26/g' refuses to replace DB with 26, but when I run s/\xDB/\xFF/ it works. Any ideas? Something seems to be wrong with 26. I have tried [\x26], which didn't help here.
OK, s/\xDB/\&/g seems to be working :)
\x2a is *, which is special in regex:
$ sed 's/a*/b/' <<<'aaa'
b
$ sed 's/a\x2a/b/' <<<'aaa'
b
You may use a bracket expression in the regex to cancel the special meaning of characters, but I see it doesn't work well with all characters with my GNU sed:
$ sed 's/\xE6[\x2A]/OK/' <<<$'I am \xE6\x2A'
I am OK
$ sed 's/[\xE6][\x2A]/OK/' <<<$'I am \xE6\x2A'
I am �*
That's because \xE6 followed by ] probably forms an invalid UTF-8 sequence in your locale. Remember to use the C locale:
$ LC_ALL=C sed 's/[\xE6][\x2A]/OK/' <<<$'I am \xE6\x2A'
I am OK
Remember that \1, \2, etc. and & are special in the replacement part too. Read Escape a string for a sed replace pattern - but you will need to escape the \xXX sequences rather than each character (or convert the characters to actual bytes first; why work with all the \xXX sequences?). Or pick another tool.
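Putting those pieces together, here is a sketch of a complete replacement for the first few bytes from the question, with * (\x2A) bracketed in the pattern and & (\x26) escaped in the replacement, all under the C locale (GNU sed assumed):

```shell
# Sample input holding four of the troublesome bytes from this thread
# (written as octal escapes for portability: E6 2A 1B DB).
printf '\346\052\033\333' > in.bin

# * (\x2A) is bracketed so it matches a literal byte; & (\x26) in the
# replacement is written \& so sed doesn't expand it to "the whole match".
LC_ALL=C sed -e 's/\xE6[\x2A]\x1B\xDB/\x64\x08\x1A\&/' in.bin > out.bin

od -An -tx1 out.bin   # -> 64 08 1a 26
```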

Converting an ncat command on Windows from a Mac/Nix example

I'm working with the Google Healthcare API and there's a step in the walk through that uses netcat to send an HL7 message to the MLLP adapter.
(I installed ncat for Windows from the Nmap project)
I have the adapter running locally but the command they provide is written for Mac/Nix users and I'm on Windows.
echo -n -e "\x0b$(cat hl7.txt)\x1c\x0d" | nc -q1 localhost 2575 | less
So I tried rewriting this for windows powershell:
$hl7 = type hl7.txt
Write-Output "-n -e \x0b" $hl7 "\x1c\x0d" | ncat -q1 localhost 2575 | less
When I try this, I get an error that "less" is invalid, and that -q1 is an invalid option.
If I remove -q1 and | less the command executes with no output or error message.
I'm wondering if I'm using ncat incorrectly here or the write-output incorrectly?
What is the -q1 parameter?
It doesn't seem to be a valid ncat parameter from what I've researched.
I've been following this walkthrough:
https://cloud.google.com/healthcare/docs/how-tos/mllp-adapter#connection_refused_error_when_running_locally
We're really converting the echo command, not the ncat command. The syntax for ASCII codes is different in PowerShell:
[char]0x0b + (get-content hl7.txt) + [char]0x1c + [char]0x0d |
ncat -q1 localhost 2575
In ASCII: 0B is vertical tab, 1C is file separator, 0D is carriage return (http://www.asciitable.com).
Or this; `v is 0B and `r is 0D:
"`v$(get-content hl7.txt)`u{1c}`r" | ncat -q1 localhost 2575
Or, if you want it this way, it's the same thing; all three versions end up sending the same bytes:
"`u{0b}$(get-content hl7.txt)`u{1c}`u{0d}" | ncat -q1 localhost 2575
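For the record, the framing in question is just MLLP: a vertical-tab byte (0x0B) before the HL7 payload, then file-separator (0x1C) and carriage-return (0x0D) after it. On the Unix side the same frame can be built portably with printf instead of echo -n -e; the message content below is a stand-in, not a real HL7 message:

```shell
# Stand-in HL7 message purely for illustration (%s keeps printf from
# interpreting the backslash in the MSH encoding characters).
printf '%s' 'MSH|^~\&|sample' > hl7.txt

# MLLP framing in octal escapes: \013 = 0x0B (VT), \034 = 0x1C (FS),
# \015 = 0x0D (CR).
printf '\013%s\034\015' "$(cat hl7.txt)" > frame.bin

# Then send it, e.g.:  ncat localhost 2575 < frame.bin
od -An -c frame.bin
```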

Remove invalid UNICODE characters from XML file in UNIX?

I have a shell script that I use to remotely clean an XML file produced by another system that contains invalid UNICODE characters. I am currently using this command in the script to remove the invalid characters:
perl -CSDA -i -pe's/[^\x9\xA\xD\x20-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]+//g;' file.xml
and this has worked so far, but now the file has a new error of, as far as I can tell, 'xA0', and what happens is my perl command reaches that error in the file and erases the rest of it. I modified my command to include xA0, but it doesn't work:
perl -CSDA -i -pe's/[^\x9\xA0\xD\x20-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]+//g;' file.xml
I have also tried using:
iconv -f UTF-8 -t UTF-8 -c file.xml > file2.xml
but that doesn't do anything. It produces an identical file with the same errors.
Is there a Unix command that I can use that will completely remove all invalid Unicode characters?
EDIT:
some HEX output (note the 1A's and A0's):
3E 1A 1A 33 30 34 39 37 1A 1A 3C 2F 70
6D 62 65 72 3E A0 39 34 32 39 38 3C 2F
You may use the following one-liner:
perl -i -MEncode -0777ne'print encode("UTF-8",decode("UTF-8",$_,sub{""}))' file.xml
You can also extend it to warn about each bad byte (the callback receives the malformed byte's ordinal value):
perl -i -MEncode -0777ne'print encode("UTF-8",decode("UTF-8",$_,sub{warn "Bad byte: $_[0]";""}))' file.xml
A0 is not a valid UTF-8 sequence. The errors you were encountering before were XML encoding errors, while this one is a character encoding error.
U+00A0 is the Unicode code point for a non-breaking space. A0 is also the iso-8859-1 and cp1252 encoding of that code point.
I would recommend fixing the problem at its source. But if that's not possible, I would recommend using Encoding::FixLatin to fix this new type of error (perhaps via the bundled fix_latin script). It will correctly replace A0 with C2 A0 (the UTF-8 encoding of a non-breaking space).
Combined with your existing script:
perl -i -MEncoding::FixLatin=fix_latin -0777pe'
$_ = fix_latin($_);
utf8::decode($_);
s/[^\x9\xA\xD\x20-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]+//g;
utf8::encode($_);
' file.xml

Why piped arguments to date command are not correctly processed?

I have the problem illustrated in the following code snippet:
$ echo "2014-10-26 23:24:38.3123123" | date -d -
Sun Oct 26 00:00:00 EDT 2014
$ date -d "2014-10-26 23:24:38.3123123"
Sun Oct 26 23:24:38 EDT 2014
As you can see, the hour/minute/second information is not picked up when I pipe the data in with echo, but it is picked up when I use it as a command-line argument. I am sure there is something dumb I am not noticing, but if anyone can enlighten me on what that is, it would be much appreciated!
When you write:
$ echo "2014-10-26 23:24:38.3123123" | date -d -
the space character between 2014-10-26 and 23:24:38.3123123 is treated as an argument separator, and the date command sees the date string as two different arguments.
You can simply escape this space character:
$ echo "2014-10-26\ 23:24:38.3123123" | date -d -
and it works:
Sun Oct 26 23:24:38 EDT 2014
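If the string really has to arrive through a pipe, another workaround sketch (GNU date and xargs assumed) is to let xargs hand the whole line to date as a single argument:

```shell
# -I{} replaces {} with the entire input line, spaces included, so date
# receives "2014-10-26 23:24:38.3123123" as one argument.
echo "2014-10-26 23:24:38.3123123" | xargs -I{} date -u -d {} +'%F %T'
# -> 2014-10-26 23:24:38
```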

Getting around truncated "ps"

I'm trying to write a script that will find a particular process based on a keyword, extract the PID, then kill it using the found PID.
The problem I'm having in Solaris is that, because the "ps" results are truncated, the search based on the keyword won't work because the keyword is part of the section (past 80 characters) that is truncated.
I read that you can use "/usr/ucb/ps awwx" to get more than 80 characters, but as of Solaris 10 this needs to be run as root, and I can't work around that restriction in my script.
Does anyone have any suggestions for getting that PID? The first 80 characters are too generic to search for (part of a java command).
Thanks.
This works for me, at least on Joyent SmartMachine:
/usr/ucb/ps auxwwww
Your assumption about ps behavior is incorrect. Even when you aren't logged in as root, "/usr/ucb/ps -ww" doesn't truncate arguments for processes you own, i.e. the processes you can kill, which are the only ones you are interested in.
$ cat /etc/release
Oracle Solaris 10 9/10 s10x_u9wos_14a X86
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 11 August 2010
$ id
uid=1000(jlliagre) gid=1000(jlliagre)
$ /usr/ucb/ps | grep abc
2035 pts/3 S 0:00 /bin/ksh ./abc aaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbb
$ /usr/ucb/ps -ww | grep abc
2035 pts/3 S 0:00 /bin/ksh ./abc aaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb ccccccccccccccccccccccccccccccccccccccccccccccccccccccc ddddddddddddddddddddddddddddddddddddddddddd
I would suggest pgrep and pkill - http://www.opensolarisforum.org/man/man1/pkill.html - instead.
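A sketch of that approach; pgrep/pkill -f match against the full argument vector, so the truncated ps display doesn't matter. The sleep command here is a placeholder for the long java command line:

```shell
# Placeholder long-running process (in the real case, the java command
# whose keyword lies past the 80-column truncation point).
sleep 300 &
bgpid=$!

pgrep -f 'sleep 300'      # -f matches the full command line, prints PID(s)

# pkill -f 'sleep 300'    # same match, but sends SIGTERM instead of listing
kill "$bgpid"             # clean up the placeholder
```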
Edit 0:
How about this ugly procfs hack instead:
~$ for f in /proc/[0-9]*/cmdline; do if grep -q --binary-files=text KEYWORD $f; \
> then l=`dirname $f`;p=`basename $l`; echo "killing $p"; kill $p; fi; done
I'm sure there's a shorter incantation for this, but my shell-fu is a bit rusty.
Disclaimers: only tested in bash on Linux; it would probably match itself too.
pargs will help here, though you'll have to iterate through all of the running processes, which is a little annoying. But this will at least show you all of a process's arguments where ps would truncate them.
user#machine:(/home/user)> pargs 23097
23097: /usr/bin/bash ./test.sh aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbb
argv[0]: /usr/bin/bash
argv[1]: ./test.sh
argv[2]: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
argv[3]: bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
argv[4]: ccccccccccccccccccccccccccccccccccccccccc
ps "whatever your options" | cat
Works for me; it tricks ps into thinking stdout is not a tty.
I don't remember the exact Solaris behavior and I don't have access to a box right now, but in any case it's better to specify the fields you want; it simplifies parsing:
ps -o pid,args
If the output is still truncated, setting the column heading to a long string may help.
/usr/ucb/ps -auxww | grep <processname> or <PID>
Use the -w option (twice for unlimited width):
$ ps -w -w -A -o pid,cmd