Nagios - installing custom plugin on Windows 7 that executes and sends back data to the server

I have set up Nagios 4 Core on an Ubuntu machine and installed NSClient++ on a Windows 7 machine. The out-of-the-box monitoring (CPU, memory, etc.) all works.
I have also written an EXE in .NET that gathers some metrics on the Windows machine, and the hope is that NSClient++ on the Windows machine would execute this EXE and marshal its output back to the server. The problem is that I don't know how to install the plugin. Do you install it on the server? On the client? Both? If so, where? Needless to say, Ubuntu (where the Nagios server is) shouldn't try to execute the .NET EXE.
When I look at the configuration files on the server, I see that the Nagios server uses check_nt to communicate with NSClient++, with a syntax like check_nt!blah. Will I need to use the same syntax when executing my .NET EXE, which is not part of the core?
I have found hardly any detailed documentation on how to install a Windows plugin and have the server and client talk to each other, so either it is extremely easy or extremely complicated. I also looked at some YouTube videos; there is nothing there for the problem I'm facing.
Any help is appreciated. Thanks all!

You'll want to define it as an NRPE check on your monitoring server, then define a check with the same name in NSClient++ on the Windows host. The NSClient++ configuration has a section for NRPE handlers, just for this (source: op5 kb):
[NRPE Handlers]
The NRPE handlers provide a way to execute any custom plugin/check command on the monitored Windows server. In this section you configure all the commands that should be available.
Adding a custom NRPE command to NSClient++ follows this syntax:
command[my_custom]=c:\mycustomdir\my_prog.exe
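Note that the [NRPE Handlers] section applies to the older 0.3.x NSClient++ releases. On 0.4.x the same thing moved into the ini-style settings tree; roughly like this (a sketch only; verify the section and module names against the docs for your exact version):
; nsclient.ini (NSClient++ 0.4.x) - enable NRPE and external scripts
[/modules]
NRPEServer = 1
CheckExternalScripts = 1

; map the NRPE command name to the executable to run
[/settings/external scripts/scripts]
my_custom = c:\mycustomdir\my_prog.exe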
To test the check from your monitoring system, you can call my_custom with check_nrpe from the CLI:
./check_nrpe -H 10.0.0.1 -c my_custom
And then define the service in your Nagios config like so:
define service{
        use                     generic-service
        host_name               windowshost
        service_description     My Custom Check
        check_command           check_nrpe!my_custom
}
You may need to do some extra work to format the output correctly. Nagios plugins signal state through their exit code: 0 is OK, 1 is WARNING, 2 is CRITICAL and 3 is UNKNOWN (source), and you may find it easier to wrap your EXE in a simple script that maps its results onto those codes.
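For example, a minimal wrapper sketch in batch (my_prog.exe and its behavior are placeholders; here it is assumed to print one line of output and exit non-zero on failure):
@echo off
rem Hypothetical wrapper: run the real check and translate its result
rem into Nagios plugin conventions (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN)
c:\mycustomdir\my_prog.exe > "%TEMP%\my_prog.out"
if errorlevel 1 (
    echo MYCHECK CRITICAL - my_prog.exe reported a failure
    exit /b 2
)
set /p RESULT=<"%TEMP%\my_prog.out"
echo MYCHECK OK - %RESULT%
exit /b 0
Point the NRPE command definition at the wrapper instead of the EXE, and the status line and exit code travel back to Nagios unchanged.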

Related

How can I copy a script from a Rundeck server to a brand-new Windows Server

One of the requirements is to keep the remote Windows Server intact.
No third-party software allowed (no WinSCP, etc).
So we configure the Windows Server with WinRM and allow remote access: AllowUnencrypted=true, Auth basic=true, etc...
Then we create a job and successfully execute a command on the Windows server, like "ipconfig".
When it comes to executing an inline script or copying a file, Rundeck tries to copy the script/file to the remote Windows server.
By default:
plugin.script-copy.default.command=get-services
where "get-services" seems to be free-form text rather than executable.
If we want to use SCP or SSH instead, here we have problem -> Windows Server doesn't have WinSCP or SSH or Python installed by default.
Is there any way to copy/deliver script to target/remote Windows Server 2008 using embedded capabilities only (no third-party software allowed) ?
Versions:
Rundeck 2.6.2 running on Linux
Windows Server 2008 R2 Enterprise, Service Pack 1
Thank you.
You can use the WinRM plugin (AKA "Overthere WinRM"): configure it, and use the copy file step in your job workflow (keep in mind that you need at least version 1.3.4 of the WinRM plugin, which supports copying files).
You need to download the plugin and put it in the Rundeck libext directory.
Add the Windows resources.xml entry (for "Overthere" WinRM plugin):
<node name="windows" description="Windows node" tags="" hostname="192.168.1.81" osArch="x86" osFamily="windows" osName="Windows 2008R2" osVersion="2008" username="user" winrm-protocol="http" winrm-auth-type="basic" winrm-cmd="CMD" winrm-password-storage-path="keys/winpasswd"/>
Set WinRM as your default node executor / default node file copier, and use the copy file step in your workflow.
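For reference, those defaults live in the project configuration; a sketch of project.properties (the provider names are an assumption on my part; verify them against the rundeck-winrm-plugin README for your plugin version):
# project.properties - make the Overthere WinRM plugin the default
# node executor and file copier (provider names assumed, not verified)
service.NodeExecutor.default.provider=overthere-winrm
service.FileCopier.default.provider=overthere-winrm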
One important note: the WinRM plugin is not in active development (and the Rundeck 2.6 branch is out of support/maintenance). The best way to deal with this is to move to the latest Rundeck version and use the PyWinRM plugin, which ships out of the box with Rundeck, is actively developed and is easier to configure than the old "Overthere" WinRM plugin, and use the copy step the same way.

Divolte-collector with MAPR, Storm, Kafka and Cassandra

I am not sure if I can get help for this on here, but I thought it was worth a try.
I have a 3-node cluster on AWS running MapR M3, and I installed Storm, Kafka, Divolte Collector and Cassandra. I would like to try some of the clickstream examples, and I am running into an issue with the tcp-consumer example. Being quite new to Java and distributed processing, I also have some clarification questions. Again, I am not quite sure where to post this, because I feel like it is divolte-collector specific, and I also have some gaps in my understanding of the Javadoc concept and the building and running of JAR files; but I figured someone could point me to some resources or help with some clarifications. I can't get the JSON string to appear in the console running the netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part (step 7), and my knowledge gap is with step 6.
Step 1: install and configure Divolte Collector
The install works, and the hello-world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, and I tested the Kafka examples, so that seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 went off without a hitch; I can load the default divolte-collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files that you generated or downloaded for the examples. If you have Python installed, you can use this:
cd <your-javadoc-directory>
python -m SimpleHTTPServer
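If the machine has Python 3 instead, the equivalent built-in server is:
python3 -m http.server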
Ok so I can reach the javadoc pages
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema with mvn clean install in the avro project that comes with the examples, as per the instructions here.
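For reference, that step looks like this (assuming the stock divolte-examples checkout layout):
cd divolte-examples/avro-schema
mvn clean install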
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to this:
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs using the Chrome developer tools; it seems Divolte is running, but I am not sure how to dig further. This is the console view. Any ideas or pointers?
Thanks anyways
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance somewhere. If you are using the pre-packaged Javadoc files that come with the examples, they hard-code the Divolte location as http://localhost:8290/divolte.js. So if you are running somewhere other than localhost, you should probably create your own Javadoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using the following command. Be sure to run it from the directory where your source tree is rooted. And of course, change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
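After regenerating, serve YOUR_OUTPUT_DIRECTORY with the same SimpleHTTPServer command from step 4 and reload the pages; the divolte.js request should then go to your collector's real hostname instead of localhost.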
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
There doesn't seem to be anything MapR-specific about the issue you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed; this doesn't touch MapR-FS or anything else MapR-specific. Writing to the distributed filesystem is another story.
We don't currently compile Divolte Collector against MapR Hadoop, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create /etc/divolte/divolte-env.sh with the following env var setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install it from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server, collect Avro files on MapR-FS, create an external Hive table on those files, and run a query.

Is it possible to do a Layer 2 packet capture in PowerShell

Is it possible to capture Layer 2 traffic using PowerShell? I've seen methods that use sockets, but they only seem to capture traffic at Layer 3 and higher. I want to look at Ethernet frames, but I'm not sure if it can be done in PowerShell. Is it possible to do this without installing any extra software/drivers on the system (maybe using a DLL or something)?
If you just need something portable rather than silent, you could use portable Wireshark to "temporarily install" the needed drivers, then use Wireshark's command-line tools to script the capture from PowerShell.
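For instance, a sketch along those lines (the portable Wireshark path is hypothetical; tshark's -D, -i, -c and -w switches are standard):
# List capture interfaces, then grab 50 raw Ethernet frames from
# interface 1 into a capture file (adjust the tshark path to your unpack dir)
$tshark = 'C:\Tools\WiresharkPortable\App\Wireshark\tshark.exe'
& $tshark -D
& $tshark -i 1 -c 50 -w C:\Temp\frames.pcapng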
Another option, if WinPcap is already installed (you could script the install and uninstall in your PowerShell file), is to use a wrapper library like Pcap.Net, which lets your script talk directly to the driver without going through Wireshark.

replacing telnet with ssh

I have some programs that use the Net::Telnet module to connect to several servers. Now the administrators have decided to replace the Telnet service with SSH, keeping everything else as before (for example, the user accounts).
I've taken a look at Net::SSH2, and I see that I would have to change most parts of the programs. Do you know of other SSH modules better suited for this replacement?
The client is a Windows box (ActiveState Perl or Cygwin Perl)
Net::OpenSSH!
And check the chapter about how to integrate it with Net::Telnet.
Thanks for your suggestions, but I finally used Net::SSH::Perl on ActivePerl for Windows
Pros:
quite similar to Net::Telnet. There is no close method, but instead of $host->close you can do $host->cmd("exit"); see the sketch after this list
native Perl implementation
Cons:
each cmd() call has its own state; for example, it doesn't keep the current directory between calls the way Net::Telnet did
needs a modification to the module code to work on Windows; see https://rt.cpan.org/Public/Bug/Display.html?id=18154
cmd("su - user") doesn't work, but cmd("su - user -c 'commands'") does

Deploy EAR file to WAS 7 from command line

I need to deploy an EAR file that is located on server A to a WebSphere server located on server B. I need to know how to deploy the EAR from server A to my WAS through the command line. I have searched the web but found results only for WAS 6 (I have WAS 7).
Does anyone know how to deploy an EAR to WAS (on a different server) through the command line?
I assume both servers are standalone. If so, use WAS_HOME/bin/wsadmin on server A and specify the RMI host/port for server B. If not, specify the host/port of server B's deployment manager.
wsadmin -host serverB.host.com -port serverBRMIPortNumber -c '$AdminApp install /path/to/localfile.ear {...options...}'
Note, this is UNIX syntax; for Windows syntax, use "double quotes". Alternatively, you can omit the -c and use interactive mode, or you can use -f file.jacl. Jython scripting is available with -lang jython. See the following for AdminApp install options (e.g., -appname or -usedefaultbindings):
http://publib.boulder.ibm.com/infocenter/wasinfo/fep/topic/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/rxml_taskoptions.html
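As a concrete sketch (Windows syntax; the SOAP connector, port 8880 and the application/option values are illustrative assumptions, not taken from the question):
rem Install the EAR and save the configuration in one wsadmin session
wsadmin -conntype SOAP -host serverB.host.com -port 8880 ^
    -c "$AdminApp install c:/path/to/localfile.ear {-appname MyApp -usedefaultbindings}" ^
    -c "$AdminConfig save"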
You should really consider a node agent; that would make all of this go away. I'm assuming you're not in a clustered environment; otherwise a simple push to, and sync of, a node agent would do the trick.
The answer above is correct, but you could also simply FTP the package to server B and use wsadmin to install it locally.