I need to implement an alert for a prometheus metric that is being exposed by many instances of a given application running on a kubernetes cluster.
The alert has to be created in a .yaml file in the following format:
- name: some-alert-name
  interval: 30s
  rules:
    - alert: name-alert
      expr: <Expression To Make>
      labels:
        event_id: XXXXX
      annotations:
        description: "Project {{ $labels.kubernetes_namespace }} / App {{ $labels.app }} / Pod {{ $labels.kubernetes_pod_name }} / Instance {{ $labels.instance }}."
        summary: "{{ $labels.kubernetes_namespace }}"
The condition to apply to the alert would be something like: givenMetricValue > 4
I have no issue getting the metric values for all instances, as I can do it with: metricName{app=~"common-part-of-deployments-name-.*"}
My trouble is in having a single alert with an expression that fires if any one of them satisfies the condition.
Is this possible?
If so, how can I do it?
Turns out, if you create the alert with a generic "all-fetching" expression like
metricName{app=~"common-part-of-deployments-name-.*"}
the alert will be triggered separately for each deployment that the regex matches. So all you need is a single alert with a generic expression.
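For illustration, here is the question's rule format with the generic expression filled in; the group name, alert name, interval, and event_id are the placeholders from the question:
- name: some-alert-name
  interval: 30s
  rules:
    - alert: name-alert
      # One alert instance fires per deployment whose metric value exceeds 4
      expr: metricName{app=~"common-part-of-deployments-name-.*"} > 4
      labels:
        event_id: XXXXX
      annotations:
        description: "Project {{ $labels.kubernetes_namespace }} / App {{ $labels.app }} / Pod {{ $labels.kubernetes_pod_name }} / Instance {{ $labels.instance }}."
        summary: "{{ $labels.kubernetes_namespace }}"
Prometheus evaluates the expression over every matching time series, so each series whose value is above the threshold produces its own firing alert with its own label set.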
I'm using Helm v3.7.0 and I have a parent chart with a few subcharts as dependencies.
One of the subcharts has a virtual service defined as per below.
Subchart Values.yaml:
ingress:
  enabled: true
  host: "hostname.local"
  annotations: {}
  tls: []
Subchart virtual.service.yaml:
{{- if .Values.ingress.enabled -}}
apiVersion: types.kubefed.io/v1beta1
kind: FederatedVirtualService
metadata:
  name: {{ template "product.fullname" . }}-web-vservice
spec:
  placement:
    clusterSelector: {}
  template:
    spec:
      gateways:
      - {{ template "product.fullname" . }}-web-gateway
      hosts:
      - {{ .Values.ingress.host }}
      http:
      # Website
      - match:
        - uri:
            prefix: /
        route:
        - destination:
            host: {{ template "product.fullname" . }}-web
            port:
              number: 80
{{- end }}
When I run:
helm template . --debug
It errors out with:
Error: template: virtual.service.yaml:1:14: executing "virtual.service.yaml" at <.Values.ingress.enabled>: nil pointer evaluating interface {}.enabled
If I move the enabled boolean outside of ingress and update the if statement it works.
i.e.
new values:
ingressEnabled: true
ingress:
  host: "hostname.local"
  annotations: {}
  tls: []
new virtual service:
{{- if .Values.ingressEnabled -}}
apiVersion: types.kubefed.io/v1beta1
kind: FederatedVirtualService
The problem is that this is happening all over the place with lots of different values, but only where they are nested, and I cannot make all values flat.
I have virtual services being specified in exactly the same way in other projects and they work perfectly. I don't believe the issue is with how I'm defining this (unless anyone can correct me?) so I think something else is preventing helm from being able to read nested values but I don't know where to look to investigate this odd behaviour.
What would make helm unable to read nested values?
Based on the comment above, I see you were able to overcome the issue, and I want to add the answer to the question for others to find.
The default values file name is lowercase values.yaml (even though we use .Values in the template).
What you can do is either:
make the filename start with a lowercase letter, or
pass the values file explicitly with the -f option
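For example, assuming the subchart lives at charts/mysubchart (directory name assumed for illustration):
# Option 1: rename so Helm loads the file automatically
mv charts/mysubchart/Values.yaml charts/mysubchart/values.yaml
helm template . --debug
# Option 2: pass a values file explicitly
helm template . -f my-values.yaml --debug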
I am trying to use the module community.kubernetes.k8s – Manage Kubernetes (K8s) objects with variables from the role (e.g. role/sampleRole/vars file).
I am failing when it comes to an integer value, e.g.:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: "{{ app }}"
        template:
          metadata:
            labels:
              app: "{{ app }}"
          spec:
            containers:
              - name: "{{ name }}"
                image: "{{ image }}"
                ports:
                  - containerPort: {{ containerPort }}
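For reference, the vars in role/sampleRole/vars look roughly like this (the concrete values below are illustrative placeholders, not the real ones):
# roles/sampleRole/vars/main.yml (illustrative)
name: samplegreen
namespace: default
app: samplegreen
image: nginx:1.21
containerPort: 80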
When I deploy the task in this format, it will obviously fail, as it cannot parse the "reference" to the var.
Sample of error:
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
found unacceptable key (unhashable type: 'AnsibleMapping')
The error appears to be in 'deploy.yml': line <some line>, column <some column>, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
ports:
- containerPort: {{ containerPort }}
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
When I use quotes on the variable e.g. - containerPort: "{{ containerPort }}" then I get the following error (part of it):
v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Ports: []v1.ContainerPort: v1.ContainerPort.ContainerPort: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|nerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"d|..., bigger context ...|\\\\\",\\\\\"name\\\\\":\\\\\"samplegreen\\\\\",\\\\\"ports\\\\\":[{\\\\\"containerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"dnsPolicy\\\\\":\\\\\"ClusterFirst\\\\\",\\\\\"restartPolicy\\\\\"|...\",\"field\":\"patch\"}]},\"code\":422}\\n'", "reason": "Unprocessable Entity", "status": 422}
I tried to cast the string to an int by using - containerPort: "{{ containerPort | int }}" but it did not work. The problem seems to come from the quotes, regardless of how I define the var in my vars file, e.g. containerPort: 80 or containerPort: "80".
I found a similar question on the forum Ansible, k8s and variables, but the user does not seem to have the same problem that I am having.
I am running with the latest version of the module:
$ python3 -m pip show openshift
Name: openshift
Version: 0.11.2
Summary: OpenShift python client
Home-page: https://github.com/openshift/openshift-restclient-python
Author: OpenShift
Author-email: UNKNOWN
License: Apache License Version 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: ruamel.yaml, python-string-utils, jinja2, six, kubernetes
Is there any workaround for this problem, or is it a bug?
Update (08-01-2020): the problem is fixed in version 0.17.0.
$ python3 -m pip show k8s
Name: k8s
Version: 0.17.0
Summary: Python client library for the Kubernetes API
Home-page: https://github.com/fiaas/k8s
Author: FiaaS developers
Author-email: fiaas@googlegroups.com
License: Apache License
Location: /usr/local/lib/python3.8/dist-packages
Requires: requests, pyrfc3339, six, cachetools
You could try the following as a workaround; in this example, we're creating a text template, and then using the from_yaml filter to transform this into our desired data structure:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec: "{{ spec|from_yaml }}"
  vars:
    spec: |
      replicas: 2
      selector:
        matchLabels:
          app: "{{ app }}"
      template:
        metadata:
          labels:
            app: "{{ app }}"
        spec:
          containers:
            - name: "{{ name }}"
              image: "{{ image }}"
              ports:
                - containerPort: {{ containerPort }}
The solution provided by larsks works perfectly, although I ran into another problem in my case, where I use templates with somewhat more complex constructs (e.g. loops) and found myself facing the same issue.
The only solution I had before was to use ansible.builtin.template – Template a file out to a remote server to copy the some_file.yml.j2 to one of my master nodes and then deploy it through ansible.builtin.shell – Execute shell commands on targets (e.g. kubectl apply -f some_file.yml).
Thanks to community.kubernetes.k8s – Manage Kubernetes (K8s) objects I am able to do all this work with a single task, e.g. (example taken from the documentation):
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
The only requirement the user needs to meet in advance is to have the kubeconfig file placed in the default location (~/.kube/config), or to use the kubeconfig option to point to the location of the file.
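For example, a sketch of the same task pointing at a non-default kubeconfig location (the path is illustrative):
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
    # Only needed when the config is not at ~/.kube/config
    kubeconfig: '/path/to/kubeconfig'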
As a last step, I delegate the task to localhost, e.g.
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
  delegate_to: localhost
The way this task works is that the user effectively ssh-es to localhost and runs the equivalent of kubectl apply -f some_file.yml.j2 against the LB or master node API, and the API applies the request (if the user has the permissions).
I am trying to create alerts in Prometheus on Kubernetes and send them to a Slack channel. For this I am using the prometheus-community Helm charts (which already include the alertmanager). As I want to use my own alerts, I have also created a values.yml (shown below), strongly inspired from here.
If I port-forward Prometheus I can see my alert there going from inactive, to pending, to firing, but no message is sent to Slack. I am quite confident that my alertmanager configuration is fine (as I have tested it with some prebuilt alerts of another chart and they were sent to Slack). So my best guess is that I am adding the alert in the wrong way (in the serverFiles part), but I cannot figure out how to do it correctly. Also, the alertmanager logs look pretty normal to me. Does anyone have an idea where my problem comes from?
---
serverFiles:
  alerting_rules.yml:
    groups:
      - name: example
        rules:
          - alert: HighRequestLatency
            expr: sum(rate(container_network_receive_bytes_total{namespace="kube-logging"}[5m])) > 20000
            for: 1m
            labels:
              severity: page
            annotations:
              summary: High request latency

alertmanager:
  persistentVolume:
    storageClass: default-hdd-retain
  ## Deploy alertmanager
  ##
  enabled: true
  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""
  ## Configure pod disruption budgets for Alertmanager
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""
  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
      slack_api_url: "I changed this url for the stack overflow question"
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      #receiver: 'slack'
      routes:
        - match:
            alertname: DeadMansSwitch
          receiver: 'null'
        - match:
          receiver: 'slack'
          continue: true
    receivers:
      - name: 'null'
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false
            title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
            text: >-
              {{ range .Alerts }}
                *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
                *Description:* {{ .Annotations.description }}
                *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
                *Details:*
                {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
                {{ end }}
              {{ end }}
So I have finally solved the problem. The problem apparently was that the kube-prometheus-stack and the prometheus Helm charts are structured a bit differently.
So instead of alertmanager.config I had to insert the code (everything starting from global) at alertmanagerFiles.alertmanager.yml.
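For illustration, a minimal sketch of that placement in the prometheus chart's values: everything from global downwards goes under alertmanagerFiles.alertmanager.yml instead of alertmanager.config (the Slack receiver is trimmed here for brevity):
alertmanagerFiles:
  alertmanager.yml:
    global:
      resolve_timeout: 5m
      slack_api_url: "I changed this url for the stack overflow question"
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      routes:
        - match:
            alertname: DeadMansSwitch
          receiver: 'null'
        - match:
          receiver: 'slack'
          continue: true
    receivers:
      - name: 'null'
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false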
So I am deploying my Spring Boot app using Helm. I am following a pre-existing formula used by our company to try and accomplish this task, but for some reason I am unable to.
My postgresql-secrets.yml file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "codes-chart.fullname" . }}-postgresql
  labels:
    app: {{ template "codes-chart.name" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  SPRING_DATASOURCE_URL: {{ .Values.secrets.springDatasourceUrl | b64enc }}
  SPRING_DATASOURCE_USERNAME: {{ .Values.secrets.springDatasourceUsername | b64enc }}
  SPRING_DATASOURCE_PASSWORD: {{ .Values.secrets.springDatasourcePassword | b64enc }}
This picks up the values in the values.yaml file
secrets:
  springDatasourceUrl: PLACEHOLDER
  springDatasourceUsername: PLACEHOLDER
  springDatasourcePassword: PLACEHOLDER
The placeholders are being overwritten in Helm using a variable override in the environment.
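For illustration only (this is not my actual pipeline; the release name and chart path are assumptions), that kind of override can be done with --set at install or upgrade time:
helm upgrade --install codes ./helm/codes \
  --set secrets.springDatasourceUrl="$SPRING_DATASOURCE_URL" \
  --set secrets.springDatasourceUsername="$SPRING_DATASOURCE_USERNAME" \
  --set secrets.springDatasourcePassword="$SPRING_DATASOURCE_PASSWORD"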
The secrets are referenced in the envFrom: of the codes-deployment.yaml:
envFrom:
- configMapRef:
    name: {{ template "codes-chart.fullname" . }}-application
- secretRef:
    name: {{ template "codes-chart.fullname" . }}-postgresql
my helm file structure is as follows:
|helm
|-codes
|--configmaps
|---manifest
|----manifest-codes-configmap.yaml
|--templates
|---application-deploy-job.yaml
|---application-manifest-configmap.yaml
|---application-register-job.yaml
|---application-unregister-job.yaml
|---codes-application-configmap.yaml
|---codes-deployment.yaml
|---codes-hpa.yaml
|---codes-ingress.yaml
|---codes-service.yaml
|---postgresql-secret.yaml
|--values.yaml
|--Chart.yaml
The issue seems to be with the SPRING_DATASOURCE_URL:
If I use the private IP of the Cloud SQL DB, then it says it is not accepting connections.
If I use the JDBC URL format:
ex: (jdbc:postgresql://google/<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>)
then I get a 403 authentication error.
What am I doing wrong?
403 Forbidden: the server understood the request but is refusing to fulfill it.
A 403 is returned for authenticated users with insufficient permissions: it indicates that the resource cannot be provided, either because it is known that no level of authentication is sufficient, or because the user is already authenticated but does not have the required authority.
Let me add some examples:
https://www.baeldung.com/kubernetes-helm
https://medium.com/zoom-techblog/from-zero-to-kubernetes-4fd354423e6a
I am trying to figure out how to escape these pieces of a YAML file in order to use them with Helm.
- name: SYSLOG_TAG
  value: '{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}[{{ index .Container.Config.Labels "io.kubernetes.pod.name" }}]'
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
The YAML file is a DaemonSet for sending logs to Papertrail; the instructions for a standard manual Kubernetes deployment are here: https://help.papertrailapp.com/kb/configuration/configuring-centralized-logging-from-kubernetes/. Here is a link to the full YAML file: https://help.papertrailapp.com/assets/files/papertrail-logspout-daemonset.yml.
I found some answers on how to escape the curly braces and quotes, but I still can't seem to get it to work. It would be easiest if there were some way to just get Helm not to evaluate each entire value.
The last I tried was this, but still results in an error.
value: ''"{{" index .Container.Config.Labels \"io.kubernetes.pod.namespace\" "}}"["{{" index .Container.Config.Labels \"io.kubernetes.pod.name\" "}}"]''
- name: SYSLOG_HOSTNAME
value: ''"{{" index .Container.Config.Labels \"io.kubernetes.container.name\" "}}"''
This is the error:
Error: UPGRADE FAILED: YAML parse error on templates/papertrail-logspout-daemonset.yml: error converting YAML to JSON: yaml: line 21: did not find expected key
I can hardcode values for both of these and it works fine. I don't quite understand how these env variables work, but what happens is that logs are sent to Papertrail for each pod in a node, with the labels from each of those pods: namespace, pod name, and container name.
env:
- name: ROUTE_URIS
  value: "{{ .Values.backend.log.destination }}"
{{ .Files.Get "files/syslog_vars.yaml" | indent 13 }}
Two sensible approaches come to mind.
One is to define a template that expands to the string {{, at which point you can use that in your variable expansion. You don't need to specially escape }}.
{{- define "cc" }}{{ printf "{{" }}{{ end -}}
- name: SYSLOG_HOSTNAME
  value: '{{ template "cc" }} index .Container.Config.Labels "io.kubernetes.container.name" }}'
A second approach, longer-winded but with less escaping, is to create an external file that has these environment variable fragments.
# I am files/syslog_vars.yaml
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
Then you can include the file. This doesn't apply any templating in the file, it just reads it as literal text.
env:
{{ .Files.Get "files/syslog_vars.yaml" | indent 2 }}
The important point with this last technique, and the problem you're encountering in the question, is that Helm reads an arbitrary file, expands all of the templating, and then tries to interpret the resulting text as YAML. The indent 2 part of this needs to match whatever the rest of your env: block has; if this is deep inside a deployment spec it might need to be 8 or 10 spaces. helm template will render a chart to text without trying to do additional processing, which is really helpful for debugging.
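For instance, if the env: block sits inside a Deployment's container spec, the include might look like the sketch below; the container name is a placeholder and the indent value is an assumption that has to line up with the surrounding YAML:
spec:
  template:
    spec:
      containers:
      - name: logspout
        env:
        - name: ROUTE_URIS
          value: "{{ .Values.backend.log.destination }}"
{{ .Files.Get "files/syslog_vars.yaml" | indent 8 }}
Here the literal env entries start eight spaces in, so the file contents are indented by the same amount to land at the same level.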