How to use two Grafana notification policies with multiple labels - Grafana

I have a Grafana setup and want to set a policy that receives all the alerts generated for the namespace "myapp". However, I want to exclude alerts which are generated from the cluster "test-cluster".
Below is my matcher setup.
Case 1:
namespace=~myapp
cluster=test-cluster
This does not help; it still sends alerts from the cluster "test-cluster".
Case 2:
namespace=~myapp
|
| nested policy with the condition below
--> cluster=test-cluster
This also won't help; it gives the same result as case 1.
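Grafana's policy matchers also support negative operators (`!=` and `!~`). A matcher `cluster=test-cluster` selects alerts from that cluster rather than excluding them, so one approach is a single policy that combines a positive namespace matcher with a negative cluster matcher. A minimal sketch as a file-provisioned policy, assuming hypothetical receiver names:

```yaml
# Sketch of Grafana alerting file provisioning; receiver names are assumptions.
apiVersion: 1
policies:
  - orgId: 1
    receiver: default-receiver            # catch-all for everything else
    routes:
      - receiver: myapp-receiver          # gets myapp alerts...
        object_matchers:
          - ["namespace", "=~", "myapp"]
          - ["cluster", "!=", "test-cluster"]  # ...except from the test cluster
```

The same two matchers can be entered in the UI on a single nested policy; both must match for the route to fire, so the exclusion applies.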

Related

Grafana dashboard variable from Loki logs

I'm a beginner with Grafana Loki, and we now have a running instance which works without issue (we can see the logs themselves). Now we want to define some variables and monitor them in the dashboard.
Below is one of our logs, forwarded via promtail -> Loki -> Grafana, belonging to the job "mqtt_log".
We want to extract the "534654234" and the "1" from the log as two variables and monitor them in the dashboard:
2022-11-02 12:16:23 mqtt_log 2022-11-02 12:16:23,428 - AliyunMqtt - INFO - elevator/534654234/cabin/position/: b'{"Name":"Group.Elevators{EquipmentNumber=534654234}.Cabins:0.Position","Value":"{"Group":"1"}","Representation":"Live","TimeStamp":1667362583365}'
The problem is we don't know how to define the variables. Can anyone share some comments? Thanks.
You can't create dynamic dashboard variables from parsed log content (only hardcoded ones). Dynamic variables can be built only from existing labels.
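For showing such values in a panel (as opposed to a dashboard variable), the numbers can be pulled out at query time with a LogQL parser. A sketch, where the regular expression is an assumption based on the sample line above:

```
{job="mqtt_log"}
  | regexp `elevator/(?P<equipment_number>[0-9]+)/cabin/position`
  | line_format "equipment={{.equipment_number}}"
```

The named capture group becomes an extracted label, which metric queries such as count_over_time(... [5m]) can then group by; extracted labels still can't feed a dashboard variable, though.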

How to get number of pods in AKS that were active in a given timeframe

So, I'm having an unexpectedly hard time figuring this out. I have a Kubernetes cluster deployed in AKS. In Azure (or the Kubernetes dashboard), how do I view how many active pods there were in a given time frame?
Updated 0106:
You can use the query below to count the number of active pods:
KubePodInventory
| where TimeGenerated > ago(2d) // set the time frame to 2 days
| where PodStatus == "Running"
| project PodStatus
| summarize count() by PodStatus
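One caveat (an observation about the table, not part of the original answer): KubePodInventory records one row per pod per collection interval, so count() over two days returns the number of inventory records, not distinct pods. A variant that counts distinct pods using the PodUid column might be:

```
KubePodInventory
| where TimeGenerated > ago(2d)
| where PodStatus == "Running"
| summarize dcount(PodUid)
```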
Original answer:
If you have configured monitoring, then you can use a Kusto query to fetch it.
Steps as below:
1. Go to the Azure portal -> your AKS.
2. In the left panel -> Monitoring -> click Logs.
3. In the table named KubePodInventory, there is a field PodStatus which you can use as a filter in your query. You can write your own Kusto query and specify the time range via the portal (by clicking the Time range button) or in the query (by using the ago() function). You should also use the count() function to count the number of pods.

EFK - Have preconfigured filter by container that will appear in Kibana

I've got the EFK stack installed on kubernetes following this addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
What I want to achieve is having all the logs of the same pod together, and maybe some other filters as well. But I don't want to configure the filters in Kibana with the GUI; I'd like to have them preconfigured, so that the containers I want to monitor are set up in advance and installed along with Kibana, rather than needing an additional import/export step. I'd like the predefined filters to work so that, immediately after the installation, I can go to "discover", select the pod name that I want to see, and then see all of its logs.
My understanding, this being pretty much the first time I use this technology, was that setting the correct parameters in fluentd-configmap.yml should do the trick, but none of my tries has altered what I see in Kibana.
Am I looking in the correct place, or is this not what these filters are for and I'm completely wasting my time? How could I implement this filtering in any case?
Any help, even if it's only a hint, would be appreciated.
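One possible direction (an assumption, not something the addon provides out of the box): Fluentd only ships the logs, while the "filters" seen in Discover live in Kibana as saved objects, so they can be pre-created through Kibana's saved objects import API instead of the GUI. A sketch of an importable saved search in NDJSON, where the index pattern, pod field name, and titles are all hypothetical:

```json
{"type":"search","id":"myapp-pod-logs","attributes":{"title":"myapp pod logs","columns":["kubernetes.pod_name","log"],"sort":[["@timestamp","desc"]],"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"index\":\"logstash-*\",\"query\":{\"language\":\"kuery\",\"query\":\"kubernetes.pod_name : myapp\"}}"}}}
```

Such a file can be posted to Kibana's saved objects import endpoint as a one-off deployment step, so the search is already there when you open Discover.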

How to cleanup database in kubernetes after service is deleted?

Consider that the following things are already satisfied:
1. MariaDB is running in a separate pod and is pre-installed.
2. When we deploy a new service, it is able to connect to MariaDB and create a SCHEMA in it.
But the final requirement is that when the service is deleted, it should clean up the SCHEMA.
I have tried writing a job with the post-delete tag.
So, just a thought: you could possibly do this by using an admission controller, i.e. your logic could be along the lines of:
Delete Pod Requested --> Hits Admission Control --> Admission Controller Removes Schema --> Pod Deleted
However, this would be a lot of custom code, and you would need a way to identify the schema that that particular service has created in the DB.
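Alternatively, since a post-delete hook runs after the release's resources are already gone, a Helm pre-delete hook Job may be enough without writing an admission controller. A sketch, where the DB host, secret, and schema names are all assumptions:

```yaml
# Sketch of a Helm pre-delete hook Job; host, secret, and schema names are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-drop-schema"
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: drop-schema
          image: mariadb:10.6
          env:
            - name: MYSQL_PWD            # picked up automatically by the mysql client
              valueFrom:
                secretKeyRef:
                  name: mariadb-credentials
                  key: password
          command: ["sh", "-c", "mysql -h mariadb -u root -e 'DROP SCHEMA IF EXISTS myservice'"]
```

The hook runs before Helm deletes the release's resources, so the DB credentials secret is still available to the Job.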

Visualizing running services in Grafana

How do I add a query to visualize a service running on Ubuntu using Grafana?
I tried to add conditions in the WHERE clause, like service = cron, but it's not working.
FROM default processes WHERE host = ubuntu1604 AND service = cron
SELECT field(total) mean() GROUP BY time(10s) fill(null)
FORMAT AS Time series
ALIAS BY Service
After adding service condition I'm not able to visualize the graph.
Can you indicate exactly what the data source is? Is it InfluxDB? If so, you may need single quotes around the string values you compare in WHERE, such as:
host = 'ubuntu1604' and service = 'cron'
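For reference, the builder rows in the question correspond roughly to the following raw InfluxQL (measurement, tag, and field names taken from the question; in InfluxQL string values take single quotes while identifiers take double quotes):

```sql
SELECT mean("total")
FROM "default"."processes"
WHERE "host" = 'ubuntu1604' AND "service" = 'cron' AND $timeFilter
GROUP BY time(10s) fill(null)
```

$timeFilter is the Grafana macro that injects the dashboard's time range.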