Monitor smart power plug with Prometheus / Grafana - REST

I barely managed to set up Prometheus & Grafana on my new Raspberry Pi (running Raspbian). Now I would like to monitor a smart power plug with a REST API. That means I could send a curl command and receive some data back:
$ curl --location --request GET '[Switch IP]/report'
{
"power": 35.804927825927734,
"relay": true,
"temperature": 21.369983673095703
}
However I am at a loss as to how to get this data automagically queried and parsed by Prometheus. My Google Fu is failing me as all the results explain how to query Prometheus. Any hints would be greatly appreciated.

It's non-trivial, unfortunately.
Prometheus "scrapes" HTTP endpoints and expects them to publish metrics in Prometheus' text-based exposition format. This is a simple format that lists each metric with its value, one per line.
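For reference, the exposition format for your plug could look something like this (a sketch only; the metric names are made up for illustration):

```
# HELP power_plug_watts Current power draw reported by the plug.
# TYPE power_plug_watts gauge
power_plug_watts 35.8
# HELP power_plug_relay_on Whether the relay is on (1) or off (0).
# TYPE power_plug_relay_on gauge
power_plug_relay_on 1
# HELP power_plug_temperature_celsius Temperature reported by the plug.
# TYPE power_plug_temperature_celsius gauge
power_plug_temperature_celsius 21.4
```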
You would need an "exporter": a small program that talks to your device, converts the response into Prometheus metrics, and publishes them on an HTTP endpoint (not REST, just a plain text page).
Then you'd point the Prometheus server at this exporter's endpoint, and Prometheus would periodically scrape the metrics representing your device, letting you graph and query the results.
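As a minimal sketch of such an exporter, using only Python's standard library (the plug's address and the metric names are assumptions based on the JSON shown above):

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PLUG_URL = "http://192.168.1.50/report"  # hypothetical address of the plug


def format_metrics(report):
    """Render the plug's JSON report in Prometheus' text exposition format."""
    lines = [
        "# TYPE power_plug_watts gauge",
        "power_plug_watts %s" % report["power"],
        "# TYPE power_plug_relay_on gauge",
        "power_plug_relay_on %d" % (1 if report["relay"] else 0),
        "# TYPE power_plug_temperature_celsius gauge",
        "power_plug_temperature_celsius %s" % report["temperature"],
    ]
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Query the plug on every scrape and translate its JSON into metrics.
        with urllib.request.urlopen(PLUG_URL) as resp:
            report = json.load(resp)
        body = format_metrics(report).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    import sys
    if "--serve" in sys.argv:
        # Expose the metrics on port 9100 for Prometheus to scrape.
        HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

You would then add a scrape job in prometheus.yml pointing at this exporter's host and port (9100 here, chosen arbitrarily).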

There are a few possible approaches to make this a bit more straightforward:
https://github.com/ricoberger/script_exporter
https://github.com/grafana/agent/issues/1371 — discussing a possible script_exporter integration
https://github.com/prometheus/pushgateway — Prometheus’ push gateway
https://github.com/prometheus/blackbox_exporter — Prometheus’ blackbox exporter
https://medium.com/avmconsulting-blog/pushing-bash-script-result-to-prometheus-using-pushgateway-a0760cd261e — this post shows something similar

Related

Does PostgreSQL have built-in OpenMetrics?

Does anyone know whether PostgreSQL has built-in /metrics (or something like that)?
I've searched the web and all I found were third-party open-source tools that send metrics to Prometheus.
Thanks :)
Unfortunately, PostgreSQL doesn't have a built-in /metrics endpoint. However, all sorts of metrics can be obtained through SQL queries against its system views.
The PostgreSQL wiki maintains a list of monitoring tools.
For Prometheus, there is a pretty good exporter (postgres_exporter).
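For example, a couple of the standard statistics views you can query directly:

```sql
-- Per-database activity counters (commits, rollbacks, block reads/hits).
SELECT datname, xact_commit, xact_rollback, blks_read, blks_hit
FROM pg_stat_database;

-- Currently running queries and their state.
SELECT pid, state, query
FROM pg_stat_activity;
```

An exporter essentially runs queries like these on a schedule and republishes the results as Prometheus metrics.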

REST API for getting performance metrics for an HDInsight cluster?

I am looking for a REST API that will let me fetch performance metrics for a given HDInsight (Hadoop/Linux) cluster, such as the amount or percentage of memory used by the cluster, CPU usage, etc. But I haven't come across anything specific. The closest link I have found is this, but it too has no reference to performance metrics. Is this info even exposed as a REST API?
If I understand correctly, you want to get the metrics of the cluster. If so, you can use the following REST API (Ambari, which backs HDInsight). For more details, please refer to the documentation.
Method: GET
URL: https://<clustername>.azurehdinsight.net/api/v1/clusters/<cluster-name>?fields=metrics/load[<start time>,<end time>,<step>]
Headers: Authorization: Basic <base64 of username:password>
For example, you can request the cluster's CPU usage metrics in the same way.
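As a sketch of how that call can be built with Python's standard library (the cluster name, credentials, and time window are placeholders):

```python
import base64
import json
import urllib.request

CLUSTER = "mycluster"                  # placeholder HDInsight cluster name
USER, PASSWORD = "admin", "password"   # placeholder cluster login

# Ask Ambari for the cluster-wide load metric over a time window
# (Unix timestamps and a step in seconds).
url = (
    "https://%s.azurehdinsight.net/api/v1/clusters/%s"
    "?fields=metrics/load[1500000000,1500003600,15]" % (CLUSTER, CLUSTER)
)
token = base64.b64encode(("%s:%s" % (USER, PASSWORD)).encode()).decode()
req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})

# Uncomment to send the request against a live cluster:
# with urllib.request.urlopen(req) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```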

Application Performance monitoring on Swisscom Application Cloud

I am investigating options for monitoring our installation in Swisscom's cloud-foundry. My objectives are the following:
monitor performance indicators for deployed application (such as cpu, disk, memory)
monitor performance indicators for services (slow queries, number of queries, ideally also some metrics on hitting quotas)
So far, I understand the options are the following (including some BUTs):
I used a very nice TOP cf-plugin (github)
This works very well. It seems to register itself for the required firehose nozzles and consume the data.
That is very useful for tracing / ad-hoc monitoring, but not very good for serious infrastructure monitoring.
Another way I found is to use firehose-syslog solution.
This can be deployed as an app that (as far as I understand) does the job in a similar way to the TOP cf-plugin.
The problem is that it requires a registered client so it can authenticate with the Doppler endpoint. For some reason, the top-cf-plugin does that automatically / in another way.
The last option I am considering is to build the monitoring into the app itself (using a special buildpack).
That can be done with Datadog, for example. But it also seems to require a dedicated UAA client to register the nozzle.
I would like to check whether somebody is (or was) on a similar road and has some findings.
Eventually I would like to raise the following questions towards the swisscom community support:
is it possible to register a UAA client so that an external service can ingest events through the firehose nozzle? (if I read correctly, this requires admin credentials)
is there an alternative way to authenticate with the nozzle (for example, using a special user and their authentication token)?
is there any alternative way to monitor CF deployments in Swisscom? Is there a paper, blog post, or other documentation that would be helpful in this respect (also for other users of AppCloud)?
Since it requires admin permissions, we cannot give out UAA clients for the firehose.
However, there are different ways to get metrics in context of a user.
CF API
You can obtain basic metrics of a specific app by polling the CF API:
https://apidocs.cloudfoundry.org/5.0.0/apps/get_detailed_stats_for_a_started_app.html
However, since you have to poll (and for each app), it's not the recommended way.
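A rough sketch of such a poll from Python (the API endpoint, app GUID, and token are placeholders; the stats endpoint is the one linked above):

```python
import json
import urllib.request

CF_API = "https://api.lyra-836.appcloud.swisscom.com"  # placeholder CF API endpoint
APP_GUID = "00000000-0000-0000-0000-000000000000"      # placeholder app GUID
TOKEN = "bearer <output of `cf oauth-token`>"          # placeholder OAuth token

# Detailed stats for a started app: one entry per instance,
# each reporting usage.cpu, usage.mem, and usage.disk.
url = CF_API + "/v2/apps/" + APP_GUID + "/stats"
req = urllib.request.Request(url, headers={"Authorization": TOKEN})

# Uncomment against a live Cloud Foundry API:
# with urllib.request.urlopen(req) as resp:
#     stats = json.load(resp)
```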
Metrics in syslog drain
CF allows devs to forward their logs to syslog drains; in more recent versions, CF also sends metrics to this syslog drain (see https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html#container-metrics).
For example, you could use Swisscom's Elasticsearch service to store these metrics and then analyze them using Kibana.
Metrics using loggregator (firehose)
The firehose streams logs to clients in two modes:
streaming all logs to admins (which requires a UAA client with admin permissions), and streaming a single app's logs and metrics to devs with permissions in that app's space. The latter is what the cf logs command uses; cf top also works this way (it enumerates all apps and streams the logs of each one).
However, you will find that most open-source tools that leverage the firehose only work in admin mode, since they're written for the platform operator.
Of course, you also have the possibility to monitor your app by instrumenting it (a white-box approach), for example by configuring Spring Actuator in a Spring Boot app or by including an agent of your favourite APM vendor (Dynatrace, AppDynamics, ...).
I guess this is the most common approach; we've seen a lot of teams have success by instrumenting their applications, especially since advanced monitoring requires you to create your own metrics anyway: the firehose-provided CPU/memory metrics are not that powerful in a microservice world.
However, option 2 would be worth a try as well, especially since the ELK stack's metrics support is getting better and better.

Logging Kubernetes with an external ELK stack

Is there any documentation out there on sending logs from containers in K8s to an external ELK cluster running on EC2 instances?
We're in the process of setting up Kubernetes, and I'm trying to figure out how to get the logging to work correctly. We already have an ELK stack on EC2 for current versions of the application, but most of the documentation out there seems to refer to ELK as deployed inside the K8s cluster.
I am also working on the same cause.
First, you should know which logging driver your Docker containers use to manage logs (json-file, journald, etc. - read here).
After that, you should use a log collector in your architecture to ship the logs to the Logstash endpoint. You can use Filebeat or Fluent Bit; they are lightweight alternatives to Logstash and Fluentd, respectively. You should use one of them rather than sending your logs directly to Logstash via syslog, since these log shippers can enrich your logs with the Kubernetes metadata of the respective containers.
There might be a lot of challenges after that: parsing log data (multiline logs, for example), and so on. For an efficient pipeline, it's better to do most of the work (e.g. extracting the date object from the logs) on the sender side rather than in a shared Logstash, which might become a bottleneck.
Note that if the container logs are not sent to stdout/stderr but written elsewhere, you might need to run Filebeat/Fluent Bit as a sidecar with your containers.
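As a rough sketch (the paths, matcher, and Logstash host are assumptions; details vary with your Filebeat version), a Filebeat configuration for shipping container logs to an external ELK stack could look like:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:        # enrich events with pod/namespace labels
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: /var/log/containers/

output.logstash:
  hosts: ["logstash.example.com:5044"]  # your external ELK stack on EC2
```

Filebeat would typically run as a DaemonSet so every node ships its containers' logs.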
As far as documentation links are concerned, I didn't find everything documented in a single place, but reading about the keywords mentioned above taught me a lot.
Hope this helps.

Flume Metrics through REST API

I'm running Hortonworks 2.3 and currently hooking into the REST API through Ambari to start/stop the Flume service and also submit configurations.
This is all working fine; my issue is: how do I get the metrics?
Previously I used to run an agent with the parameters to produce the metrics to a http port and then read them in from there using this:
-Dflume.root.logger=INFO,console
-Dflume.monitoring.type=http
-Dflume.monitoring.port=XXXXX
However now that Ambari kicks off the agent I no longer have control over this.
Any assistance appreciated :-)
Using Ambari 2.6.2.0,
http://{ipadress}:8080/api/v1/clusters/{your_cluster_name}/components/?ServiceComponentInfo/component_name=FLUME_HANDLER&fields=host_components/metrics/flume/flume
gives flume metrics breakdown by components.
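A sketch of that call from Python (the Ambari host, cluster name, and credentials are placeholders):

```python
import base64
import json
import urllib.request

AMBARI = "http://10.0.0.1:8080"        # placeholder Ambari host
CLUSTER = "mycluster"                  # placeholder cluster name
USER, PASSWORD = "admin", "admin"      # placeholder Ambari login

# Flume metrics for all FLUME_HANDLER components, broken down by host.
url = (
    "%s/api/v1/clusters/%s/components/"
    "?ServiceComponentInfo/component_name=FLUME_HANDLER"
    "&fields=host_components/metrics/flume/flume" % (AMBARI, CLUSTER)
)
token = base64.b64encode(("%s:%s" % (USER, PASSWORD)).encode()).decode()
req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})

# Uncomment against a live Ambari server:
# with urllib.request.urlopen(req) as resp:
#     metrics = json.load(resp)
```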
I found the answer by trying out (and trimming down) the API call provided in this JIRA issue (which complains about how slow fetching Flume metrics is): https://issues.apache.org/jira/browse/AMBARI-9914?attachmentOrder=asc
Hope this helps.
I don't know if you still need the answer. This happens because Hortonworks disables JSON monitoring by default; it uses its own metric class to send the metrics to Ambari Metrics. While you can't retrieve them from Flume directly, you can still retrieve them through the Ambari REST API: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md.
Good luck,