Is there an API for Ganglia? - ganglia

Hello, I would like to enquire whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server?

The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition can be seen on GitHub here.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
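As a rough sketch, the relevant gmetad.conf directives look like this (monitor.example.com is a placeholder for whichever host should be allowed to query):
# /etc/ganglia/gmetad.conf
# hosts allowed to connect to the XML ports
trusted_hosts 127.0.0.1 monitor.example.com
# or, to allow any host (less secure):
# all_trusted on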
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
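As an illustration, here is a minimal Python sketch that opens a TCP connection to the interactive port and sends one of the queries above (host, port and cluster name are placeholders; it assumes gmetad accepts connections from the querying host, see trusted_hosts above):
import socket

GMETAD_HOST = "localhost"   # host running gmetad (placeholder)
INTERACTIVE_PORT = 8652     # default interactive port

def query_gmetad(query):
    """Send a raw text query to gmetad and return the XML reply."""
    with socket.create_connection((GMETAD_HOST, INTERACTIVE_PORT)) as sock:
        sock.sendall((query + "\n").encode())
        chunks = []
        # gmetad closes the connection after sending the full reply
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

print(query_gmetad("/clusterName?filter=summary"))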

Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle

There is also an API you can install from PyPI with 'pip install gangliarest'; it sets up a configurable REST API backed by a Redis cache and indexer to improve performance.
https://pypi.python.org/pypi/gangliarest

Related

How to use nifi.web.proxy.host and nifi.web.proxy.context.path?

I have deployed NiFi with Kerberos in a cluster and I am using haproxy to access the UI. I am able to access the NiFi UI through the individual node URLs, but it is not working with the load balancer URL and I get the following error:
System Error
The request contained an invalid host header
I think it can be fixed by the nifi.web.proxy.host and nifi.web.proxy.context.path parameters. I tried these two parameters but the problem still remains.
This issue relates to a change introduced in NiFi 1.5 (NIFI-4761).
To resolve this issue, whitelist the hostname used to access NiFi via the following parameter in the nifi.properties configuration file:
nifi.web.proxy.host = <host:port>
It's a comma-separated list of allowed HTTP Host header values to consider when NiFi is running securely and will be receiving requests to a different host[:port], for example when running in a Docker container or behind a proxy (e.g. localhost:18443, proxyhost:443). By default, this value is blank, meaning NiFi should allow only requests sent to the host[:port] that NiFi is bound to.
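For example, if the UI is reached through the haproxy frontend at a hostname like nifi.example.com (a placeholder), the nifi.properties entry could look like:
nifi.web.proxy.host=nifi.example.com:443,nifi-node1.example.com:9443
Restart NiFi after changing nifi.properties so the new value is picked up.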

CORS and database URI problems in a scale-out architecture

I have a React frontend and a Spring Boot backend with MongoDB behind it.
I have issues with setting two parameters in the Spring Boot service.
The first is the address of MongoDB, which is currently set to localhost:27017 in application.properties.
This works locally, but since I plan to scale out using Kubernetes and Docker images, I would like to know how and where to define it for the case in which I have the database hosts mongo1, mongo2 and mongo3 and would like to pass all three URIs.
The second issue is trickier! The React frontend doesn't work in Chrome until I put an allow-cross-origin annotation over my Spring REST endpoint. I hardcoded localhost:3000 there, but when I scale out using Kubernetes this won't work if the frontend gets data from another host in the cluster. What should I do here?
To answer your first question, you can configure multiple data sources; see the Spring Boot documentation on how to configure more than one data source (80.2 Configure Two DataSources).
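If the three Mongo hosts are actually members of one replica set rather than independent databases, a common alternative is a single connection string that lists all hosts in application.properties (database name and replica set name below are placeholders):
spring.data.mongodb.uri=mongodb://mongo1:27017,mongo2:27017,mongo3:27017/mydb?replicaSet=rs0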
For the second question, you can simply use a wildcard CORS URL, or, if you know all of your load-balanced frontend server URLs, you can pass them as a list of CORS URLs.
– * – means that all origins are allowed.
– If undefined, all origins are allowed.
RECOMMENDATION
Build your React app (e.g. via yarn) and deploy it on Apache or nginx. Once you have set up a domain or subdomain for the frontend and load balanced it, you no longer need to run the frontend on specific ports.

Call a NiFi processor as a REST API

I want to call a NiFi custom processor as a REST API, pass the parameters at run time through PySpark, and retrieve the results in the response object.
Can anyone please suggest different approaches for this?
Use the following sequence of processors:
HandleHttpRequest
extract parameters
your other processors...
prepare response
HandleHttpResponse
The steps are:
1. Configure the HandleHttpRequest processor.
2. Enable the required HTTP methods (GET, POST, DELETE, etc.).
3. Set the listening port.
4. Attach the Context Map to a service (the listener).
5. Enable the service and the processor.
Bonus:
If you run Nifi from a Docker container, as I do, you should get the container's IP:
docker inspect <container-name> --format='{{.NetworkSettings.IPAddress}}'
Now you can send a request from Postman and the HandleHttpRequest processor will pick it up.
I created a simple template to exemplify this scenario, in which the HTTP request's body is saved into a directory.
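Since the question mentions PySpark, here is a minimal Python sketch of the client side, assuming HandleHttpRequest listens on port 8099 on the NiFi host (host, port and payload are placeholders):
import requests  # plain HTTP client; usable from a PySpark driver script

NIFI_ENDPOINT = "http://nifi-host:8099/"  # placeholder host and HandleHttpRequest listening port

# parameters to pass at run time (placeholders)
payload = {"param1": "value1", "param2": "value2"}

# POST the parameters; whatever the flow between HandleHttpRequest and
# HandleHttpResponse produces comes back in the response body
resp = requests.post(NIFI_ENDPOINT, json=payload, timeout=30)
resp.raise_for_status()
print(resp.status_code)
print(resp.text)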

How to send PHP app logs directly to ELK service?

According to the documentation there are two ways to send log information to the SwisscomDev ELK service.
Standard way via STDOUT: Every output to stdout is sent to Logstash
Directly send to Logstash
I'm asking about way 2: how is this achieved, and in particular what input format is expected?
We're using Monolog in our PHP buildpack based application and using its stdout_handler is working fine.
I tried the GelfHandler (connection refused) and the SyslogUdpHandler (no error, but no result), both configured to use the VCAP_SERVICES logstashHost and logstashPort as the endpoint/host to send logs to.
Binding works and the environment variables are set, but I have no idea how to send log information from our application in a format compatible with the SwisscomDev ELK service's Logstash endpoint.
Logstash is configured with a tcp input, which is reachable via logstashHost:logstashPort. The tcp input is configured with its default codec, which is the line codec (source code; not the plain codec as stated in the documentation).
The payload of the log event should be encoded in JSON so that the fields are automatically recognized by Elasticsearch. If this is the case, the whole log event is forwarded without further processing to Elasticsearch.
If the payload is not JSON, the whole log line will end up in the field message.
For your use case with Monolog, I suggest you use the SocketHandler (pointed at logstashHost:logstashPort) in combination with the LogstashFormatter, which takes care of the JSON encoding with the log events being line delimited.
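Independently of Monolog, a quick way to check what the tcp input with the line codec expects is to send one newline-terminated JSON document yourself. Here is a minimal Python sketch (host, port and field names are placeholders):
import json
import socket

LOGSTASH_HOST = "logstashHost"  # value from the service binding (placeholder)
LOGSTASH_PORT = 5044            # placeholder; use the bound logstashPort

# a minimal log event; field names are placeholders
event = {"message": "hello from the app", "level": "INFO", "app": "my-php-app"}

with socket.create_connection((LOGSTASH_HOST, LOGSTASH_PORT)) as sock:
    # the line codec expects one JSON document per newline-terminated line
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))
If that event shows up with its fields in Elasticsearch, the line-delimited JSON produced by LogstashFormatter over the SocketHandler should work the same way.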

Windows ServiceBus 1.1 for Windows Server

I moved the databases from our Service Bus test environment.
I started by leaving the farm with the single node, then I moved the databases.
After rejoining the farm I see that GatewayDBConnectionString is still pointing to the old one.
I can't find any valid PowerShell command to reconfigure the value in question.
Anyone know how to fix this?
Thank you in advance.
To answer this you will need a bit more background, so here is a high-level overview of Service Bus 1.1 Server farm configuration:
Service Bus Server 1.1 is a platform where users can create highly durable, distributed Pub-Sub entities (messaging Queues/Topics). In simple words, its main job is to translate Compute (your VMs) and Data (your message container databases) into messaging functionality: durable Queues and Topics. So, in short, the configuration wizard or the PowerShell cmdlets used to configure Service Bus 1.1 Server will ask you for the VMs and databases.
The database SBManagementDB is considered the authoritative source of truth for any farm-level configuration, such as the nodes that are part of the farm (Store.Nodes), the ports opened on each node, the Gateway database connection string (cluster config), etc. Also please note that, as per the Windows Server product guidelines, any information that has to be securely persisted is encrypted, including the Gateway DB connection string.
a) When you ran New-SBFarm (with a Gateway DB connection string), you essentially told SBManagementDB the Gateway DB server, database name, etc.
b) When you ran Add-SBHost, you again told SBManagementDB that you want to add one node to this farm.
The Gateway DB connection string is the one place of truth for all Gateway services to find any run-time info, such as container databases, entity-to-container mapping, etc.
c) Likewise, when you ran the New-SBMessageContainer cmdlet, you told SBGatewayDB that you are adding one container database.
Now, with this background, let's see what effect the actions you took above actually had:
- When you moved all the databases to a different server, you changed the Gateway database connection string, but the Gateway connection string you had communicated to SBManagementDB (using the New-SBFarm cmdlet) was still pointing to the old server.
- When you removed the node from the farm and joined it back again, you removed one node from the configuration and re-added it - no effect :)
The ANSWER
Use the Restore-SBFarm PowerShell cmdlet to tell SBManagementDB that you changed the Gateway DB,
and then use the Restore-SBMessageContainer PowerShell cmdlet to tell the Gateway DB that you changed the container databases.
Now, add the Nodes back to this restored farm.
HTH!
Sree