Grafana Prometheus merge two columns

I use the sflow-rt-exporter (https://github.com/sflow-rt/prometheus) to collect my traffic on my switches.
I then created a Grafana table where I see the traffic separated into "source", "destination" and "traffic".
Now I would like to create a table in which the columns "source" and "destination" are combined into one column, so that it doesn't matter whether the traffic went from or to a given server.
Example:
Source | Destination | Traffic
123.4.5.6 | 234.5.6.7 | 200B
234.5.6.7 | 123.4.5.6 | 500B
should become
IP | Traffic
123.4.5.6 | 700B
After a week of trying I finally give up and hope that one of you can help me :)
Thanks in advance.
Greetings
L1nk27

The key to your request is that you want to aggregate across distinct dimensions (labels), which is not possible in Prometheus directly.
Therefore, you must create another dimension (let's call it IP) that collects both sides (source and destination). This can be done using the label_replace function:
This will copy source label into IP:
label_replace(traffic, "IP", "$1", "source", "(.*)")
This will copy destination label into IP:
label_replace(traffic, "IP", "$1", "destination", "(.*)")
Note that there cannot be duplicates, because the original labels are kept (this is important). Then you can use the OR binary operator to concatenate the two vectors and sum over the IP label:
sum by (IP) (
  label_replace(traffic, "IP", "$1", "source", "(.*)")
  OR
  label_replace(traffic, "IP", "$1", "destination", "(.*)")
)

Related

Unable to set Disk usage alert on grafana

On multiple VMs (15+) I use the TIG stack (Telegraf, InfluxDB and Grafana) to monitor system stats such as CPU, RAM, disk, etc.
The data is exported via Telegraf and stored in InfluxDB, which is then used as a datasource in Grafana.
The problem I am facing is setting up an alert on any system metric.
In the Query section I use raw queries like these:
Disk:
SELECT last(used_percent) AS PercentageUsed FROM "disk"
WHERE "host" =~ /$server$/ AND "path" = '/' AND $timeFilter
GROUP BY time($interval), "host", "path"
CPU:
SELECT mean("usage_user") AS "user" FROM "cpu"
WHERE ("cpu" = 'cpu-total') AND host =~ /$server$/ AND $timeFilter
GROUP BY time($interval), host ORDER BY asc
My requirement is to use a template variable so that the same stat for all VMs appears in one graph.
But the problem is that I am unable to configure any alert on this query, due to the error:
Template variables are not supported in alert queries
It does not sound like that is possible, per this thread.
This means you will either have to create multiple panels (one per template variable value) or use a regex instead of the template variable, as sketched below.
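As a rough sketch of the regex route, the disk alert query could hard-code a host pattern in place of $server. The /^prod-.*/ pattern and the fixed 1m interval below are placeholders, not taken from the question; adjust them to your own host naming and evaluation interval:

SELECT last(used_percent) AS PercentageUsed FROM "disk"
WHERE "host" =~ /^prod-.*/ AND "path" = '/' AND $timeFilter
GROUP BY time(1m), "host", "path"

Only the dashboard variable $server has to go; $timeFilter is a datasource macro that Grafana fills in from the alert rule's own time range.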

Increase Filter Limit in Apache Superset

I am trying to create a filter for a field that contains over 5000 unique values. However, the filter's query is automatically setting a limit of 1000 rows, meaning that the majority of the values do not get displayed in the filter dropdown.
I updated the config.py file inside the 'anaconda3/lib/python3.7/site-packages' directory by increasing the DEFAULT_SQLLAB_LIMIT and QUERY_SEARCH_LIMIT to 6000, however this did not work.
Is there any other config that I need to update?
P.S. The snippet below shows the JSON representation of the filter query where the issue seems to be coming from.
"query": "SELECT casenumber AS casenumber\nFROM pa_permits_2019\nGROUP BY casenumber\nORDER BY COUNT(*) DESC\nLIMIT 1000\nOFFSET 0"
After using the grep command to find all files containing the text '1000', I found out that the filter limit can be configured through filter_row_limit in viz.py.
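For reference, the change amounts to editing that attribute on the filter-box visualization class. A minimal sketch of the kind of edit (the class name and default may differ in your Superset version, so verify against your own viz.py):

# anaconda3/lib/python3.7/site-packages/superset/viz.py
class FilterBoxViz(BaseViz):      # name as found via grep; check your copy
    viz_type = "filter_box"
    filter_row_limit = 6000       # raised from the default of 1000

Restart Superset afterwards so the new limit takes effect.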

Combine Grafana metrics with mismatched labels

I have two metrics (relating to memory usage in my Kubernetes pods) defined as follows:
kube_pod_container_resource_limits_memory_bytes{app="kube-state-metrics",container="foo",instance="10.244.0.7:8080",job="kubernetes-endpoints",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",namespace="test",node="aks-nodepool1-25518080-0",pod="foo-cb9bc5fb5-2bghz"}
container_memory_working_set_bytes{agentpool="nodepool1",beta_kubernetes_io_arch="amd64",beta_kubernetes_io_instance_type="Standard_A2",beta_kubernetes_io_os="linux",container_name="foo",failure_domain_beta_kubernetes_io_region="westeurope",failure_domain_beta_kubernetes_io_zone="1",id="/kubepods/burstable/pod5b0099a9-eeff-11e8-884b-ca2011a99774/eeb183b21e2b3226a32de41dd85d7a2e9fc8715cf31ea7109bfbb2cae7c00c44",image="#sha256:6d6003ba86a0b7f74f512b08768093b4c098e825bd7850db66d11f66bc384870",instance="aks-nodepool1-25518080-0",job="kubernetes-cadvisor",kubernetes_azure_com_cluster="MC_test.planned.bthbygg.se_bthbygg-test_westeurope",kubernetes_io_hostname="aks-nodepool1-25518080-0",kubernetes_io_role="agent",name="k8s_foo_foo-cb9bc5fb5-2bghz_test_5b0099a9-eeff-11e8-884b-ca2011a99774_0",namespace="test",pod_name="foo-cb9bc5fb5-2bghz",storageprofile="managed",storagetier="Standard_LRS"}
I want to combine these two into a percentage, by doing something like
container_memory_working_set_bytes{namespace="test"}
/ kube_pod_container_resource_limits_memory_bytes{namespace="test"}
but that gives me no data back, presumably because there are no matching labels to join the data sets on. As you can see, I do have matching label values, but the label names don't match.
Is there some way I can formulate my query to join these on e.g. pod == pod_name, without having to change the metrics at the other end (where they are exported)?
You can use the PromQL label_replace function to create a new matching label from the original labels.
For instance, you can use the below expression to add a container_name="foo" label to the first metric which can be used to do the join:
label_replace(
  kube_pod_container_resource_limits_memory_bytes,
  "container_name", "$1", "container", "(.*)"
)
You can use the same pattern to create any other labels needed for the matching, for example:
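Putting it together, a sketch of the full ratio: it copies container into container_name and pod into pod_name on the limits metric, then matches on those labels. Adjust the label list in on() to whatever your series actually share:

container_memory_working_set_bytes{namespace="test"}
/ on(namespace, pod_name, container_name) group_left()
label_replace(
  label_replace(
    kube_pod_container_resource_limits_memory_bytes{namespace="test"},
    "container_name", "$1", "container", "(.*)"),
  "pod_name", "$1", "pod", "(.*)")

Multiply by 100 if you want the result as a percentage.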

How do you update all column values in Cassandra without specifying the keys?

Let's say I have the following table(only bigger):
key | type
----------------
uuid1 | blue
uuid2 | red
uuid3 | blue
What I want to be able to do is change everything that is blue to green. How would I do this without specifying all the UUIDs with the CLI or CQL?
You have a couple of choices:
You can put a secondary index on the "type" column, then query all items equal to "blue". Once you have those, you have all their keys, and you can do a batch mutation to set all the values to "green" (a CQL sketch follows below).
You can use the Hadoop integration to read in all the columns, then output the updated data in your reducer. Pig would be a good choice for this type of work.
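A rough CQL sketch of the first approach (secondary index plus batch update). The table name colors is made up for illustration; key and type are the columns from the example above:

CREATE INDEX IF NOT EXISTS ON colors (type);

SELECT key FROM colors WHERE type = 'blue';

-- for every key returned (uuid1 and uuid3 in the example):
BEGIN BATCH
  UPDATE colors SET type = 'green' WHERE key = uuid1;
  UPDATE colors SET type = 'green' WHERE key = uuid3;
APPLY BATCH;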

Replace Single Column Value in Single Row in Sqlite with iPhone App

I just started working with SQLite in an iPhone application. Now the question is: I have 3 columns in a table, ID, Channel_Name and Channel_IP. Here is a table example:
ID | Channel_Name | Channel_IP
1 | XYZ | http://0.0.0.0/indez.sdp/playlist.m3u8
2 | ABC | http://0.0.0.0/index.sdp/playlist.m3u8
Now I just want to update the IP address in the Channel_IP column; not the whole URL, only the IP address inside it (e.g. update 0.0.0.0 to 1.1.1.1). I also searched on Google but did not find any relevant solution, so if anyone knows, please let me know.
Thank you
Use the query below:
UPDATE yourtableName SET Channel_IP = replace(Channel_IP, '0.0.0.0', '1.1.1.1');
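If you want to be sure only rows that actually contain the old address are touched, you can add a WHERE clause (same idea, just scoped; yourtableName is still a placeholder for your table):

UPDATE yourtableName
SET Channel_IP = replace(Channel_IP, '0.0.0.0', '1.1.1.1')
WHERE Channel_IP LIKE '%0.0.0.0%';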