How to store data in Nagios from a Kafka consumer - apache-kafka

I am very new to the Nagios ecosystem. I have installed and set up Nagios XI and Nagios Log Server on my Linux server.
I have a Kafka cluster that contains many JSON messages, and a consumer that can read data from it. My data is location-based: each JSON message consists of multiple fields, the most important of which is the geolocation data, and ultimately I want to display the stored data (in Nagios) on a map.
I searched the web (the Nagios XI API docs and the Nagios Log Server API docs) but couldn't find anything useful on how to store (or send) data to Nagios.
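For context, a minimal sketch of the kind of consumer described above might look like the following, assuming the kafka-python client; the topic name, broker address, and JSON field names are hypothetical and would have to match the real data:

    # Sketch: read location JSON from Kafka with kafka-python.
    # Topic name, broker address and field names are hypothetical.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "locations",                              # hypothetical topic name
        bootstrap_servers="localhost:9092",       # your Kafka broker(s)
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        record = message.value
        # e.g. {"device": "sensor-1", "lat": 35.7, "lon": 51.4, ...}
        lat, lon = record.get("lat"), record.get("lon")
        print(f"received {record.get('device')} at ({lat}, {lon})")
        # Forwarding each record to Nagios XI / Nagios Log Server is the
        # missing piece this question is asking about.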

Related

How to monitor 500+ servers using Grafana with SQL Server as the data source

Currently we monitor our SQL Server instances, running on Windows, via MS SQL Server Reporting Services (SSRS) using shared data sources. To clarify: we do not store data on a centralized server in order to monitor the 500+ target servers; we keep the monitoring data on the local SQL database servers and use shared data sources in SSRS to build dashboards.
Now our firm is encouraging us to use Grafana for dashboards, since they have purchased (or are already running) some Grafana server licensing. What I know of Grafana is that an instance can be given to us to monitor the SQL servers described above.
My question is: how would Grafana dynamically connect to those 500+ servers? I see that a data source is created once, but how will I change or create multiple data sources when I have around 1,000 servers to monitor?
Please suggest a guide.
You may have to write a bit of code and use data source provisioning and/or the Grafana data source API so that Grafana picks up the new data sources.
If you set up a system (user data / init script / IaC) in which this API is called every time a new server comes up, you will be able to maintain the data sources without manual maintenance.
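As a rough illustration, such a script could call Grafana's HTTP API to register a data source whenever a new server appears. This is only a sketch, not the questioner's setup: the Grafana URL, API token, and connection details are placeholders, and the exact payload fields depend on the data source type and Grafana version.

    # Sketch: register a new Microsoft SQL Server data source via Grafana's HTTP API.
    # GRAFANA_URL, API_TOKEN and all connection details below are placeholders.
    import requests

    GRAFANA_URL = "https://grafana.example.com"   # hypothetical Grafana instance
    API_TOKEN = "xxxxxxxx"                        # an API key/token with admin rights

    def add_sql_datasource(name, host, database, user, password):
        payload = {
            "name": name,
            "type": "mssql",                      # Microsoft SQL Server data source type
            "access": "proxy",
            "url": f"{host}:1433",
            "database": database,
            "user": user,
            "secureJsonData": {"password": password},
        }
        resp = requests.post(
            f"{GRAFANA_URL}/api/datasources",
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # Called from user data / an init script whenever a new server comes up:
    # add_sql_datasource("sql-server-0042", "sqlhost42.internal", "monitoring", "grafana", "secret")

Alternatively, file-based data source provisioning (YAML files under Grafana's provisioning directory) achieves the same result without API calls, at the cost of redeploying the files.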

Kafka Connect: Error detection when worker fails

I'm submitting a connector to Kafka Connect. The connector being created is an SFTP connector. When the password is wrong, the creation call still returns a success response even though the connector then fails; the "password is wrong" error is not reported at that time. This is just one scenario, and there could be multiple scenarios like it. When I call <host>/connectors/<connector-name>/status, I do get the error saying it failed to establish a connection, but this endpoint has a little delay: if I try it immediately after creating the connector, I may not get any result (404).
What is the proper way of handling this using the status API call? Is there a delay that needs to be added before firing this API, or can it be handled while submitting the connector?
When you create the connector, Kafka Connect naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is what connects to the SFTP server with the given connection details).
Therefore the delay is natural, and there is no way to know that your connection details are incorrect unless you test them yourself before launching the connector.
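In practice that means polling the status endpoint with a short retry loop and treating 404 as "not registered yet". A minimal sketch, assuming the Connect REST API on localhost:8083 and a hypothetical connector name:

    # Sketch: poll the Kafka Connect status endpoint until the connector and its
    # tasks report RUNNING, or fail fast if anything reports FAILED.
    # Host and connector name are placeholders.
    import time
    import requests

    CONNECT_URL = "http://localhost:8083"         # Kafka Connect REST endpoint
    CONNECTOR = "my-sftp-connector"               # hypothetical connector name

    def wait_for_connector(timeout_s=60, poll_s=2):
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            resp = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status", timeout=10)
            if resp.status_code == 404:
                # Connector not in the status backing store yet: wait and retry.
                time.sleep(poll_s)
                continue
            resp.raise_for_status()
            status = resp.json()
            states = [status["connector"]["state"]]
            states += [t["state"] for t in status.get("tasks", [])]
            if "FAILED" in states:
                raise RuntimeError(f"connector failed: {status}")
            if status.get("tasks") and all(s == "RUNNING" for s in states):
                return status
            time.sleep(poll_s)
        raise TimeoutError("connector did not reach RUNNING state in time")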

Mirth Connect sends old messages when changing servers

I have a Mirth Connect application installed on an Ubuntu server. I am trying to move the application from one server to another (a DRC server). When I moved the application, somehow Mirth kept sending old messages to the channel.
The source connector of the sending channel is a Database Reader and the destination connector type is a TCP Sender. I'm using Mirth Connect version 3.5.2.
Does anyone know why this is happening? Are there any log files that I need to clear when moving the application from one server to another?
This can happen for several reasons: application logic, queued messages, etc. My guess is that you moved the appdata directory along with the installation; if so, you should be seeing similar channel statistics to the server you moved from.
Mirth stores all channel information, transactions, etc. under the appdata folder by default. If you are using the default settings, it uses a Derby DB. You can connect to that DB with any DB client that supports JDBC, e.g. SQuirreL SQL or DbVisualizer, and that can give you an idea of what's happening.
I recommend doing a clean setup and then exporting/importing your channels into the new environment. You can also consider using another DB (Oracle/SQL Server/MySQL, ...) for Mirth; the current version is 3.9.10 and it has better support for DBs other than Derby.
As mentioned in the comments, your application logic also matters.
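If you prefer a script to a GUI client, a quick way to peek at the embedded Derby database might look like the sketch below. It assumes the jaydebeapi package, a locally available Derby driver jar, and the default appdata/mirthdb location, all of which may differ in your installation; Mirth must be stopped first, because embedded Derby allows only one process to open the database at a time.

    # Sketch: inspect Mirth's embedded Derby DB via JDBC from Python.
    # The appdata path and driver jar path below are placeholders.
    import jaydebeapi

    conn = jaydebeapi.connect(
        "org.apache.derby.jdbc.EmbeddedDriver",
        "jdbc:derby:/opt/mirthconnect/appdata/mirthdb",   # hypothetical install path
        jars="/path/to/derby.jar",                        # Derby embedded driver jar
    )

    cur = conn.cursor()
    # List the user tables to see what Mirth has persisted
    # (channel, message and statistics tables, etc.).
    cur.execute("SELECT tablename FROM sys.systables WHERE tabletype = 'T'")
    for (table_name,) in cur.fetchall():
        print(table_name)

    cur.close()
    conn.close()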

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion ContextBroker to store information from different sensors and Wirecloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the Fiware documentation, and it recommends storing the data in a Cosmos instance on Fi-lab, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network with no internet access, and we also want that information stored on our own server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured in order to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus.
If you download the latest version (0.7.0 at the moment of writing this), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 it is possible to have multiple instance configurations that run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration that you will find described in the README.md; a sketch of the relevant HDFS sink settings is shown below.
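For orientation only, the HDFS sink section of agent.conf would then point at the HttpFS/WebHDFS endpoint of your private Hadoop instead of Cosmos. The property names below follow the 0.7.x agent.conf.template as best I recall and may differ between Cygnus versions, so verify them against the template shipped with your release; the host, port, and credentials are placeholders.

    # Illustrative excerpt of agent.conf for persisting to a private HDFS.
    # Verify property names against your own agent.conf.template.
    cygnusagent.sinks = hdfs-sink
    cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
    # Point at your local Hadoop's HttpFS (14000) or WebHDFS (50070) endpoint
    # instead of the Cosmos one:
    cygnusagent.sinks.hdfs-sink.cosmos_host = my-namenode.local
    cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
    cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
    cygnusagent.sinks.hdfs-sink.cosmos_default_password = xxxxxxxx
    cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs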

Installing and setting up Logstash

I need to use Logstash to parse data from custom log files (generated by our application). I have a Tomcat server and MongoDB. After going through the documentation online, I'm still unclear on how to use the different input sources. There is a community-based MongoDB plugin, but I'm unclear on how to use it.
How can I set up Logstash, and where should I start, in order to use it to parse logs from files?
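For orientation, a minimal Logstash pipeline for a custom log file usually combines a file input, a grok filter, and an output; the path and grok pattern below are placeholders for the application's actual log format.

    # Sketch of a minimal Logstash pipeline, e.g. saved as custom-app.conf and
    # run with: bin/logstash -f custom-app.conf
    # The path and grok pattern are placeholders for your application's format.
    input {
      file {
        path => "/var/log/myapp/*.log"          # hypothetical log location
        start_position => "beginning"
      }
    }
    filter {
      grok {
        # Example line: "2024-01-01T12:00:00 INFO something happened"
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
      }
    }
    output {
      stdout { codec => rubydebug }             # swap for elasticsearch { ... } later
    }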