How to insert a secret into the Alertmanager configuration file - Kubernetes

I provisioned alertmanager using Helm (and ArgoCD).
I need to set the smtp_auth_password value, but not as plain text:
smtp_auth_username: 'apikey'
smtp_auth_password: $API_KEY
How can I achieve this? I have heard about "external secrets", but is there an easier way?

Solution
If you use prometheus-community/prometheus, which includes this Alertmanager chart as a dependency, you can do the following:
Create a secret in the same namespace where your Alertmanager pod is running:
kubectl create secret generic alertmanager-secrets \
  --from-literal="opsgenie-api-key=YOUR-OPSGENIE-API-KEY" \
  --from-literal="slack-api-url=https://hooks.slack.com/services/X03R2856W/A14T19TKEGM/...."
Mount that secret by using extraSecretMounts:
alertmanager:
  enabled: true
  service:
    annotations:
      prometheus.io/scrape: "true"
  # contains secret values for opsgenie and slack receivers
  extraSecretMounts:
    - name: secret-files
      mountPath: /etc/secrets
      subPath: ""
      secretName: alertmanager-secrets
      readOnly: true
Use the mounted files in your receivers:
receivers:
  - name: slack-channel
    slack_configs:
      - channel: '#client-ccf-ccl-alarms'
        api_url_file: /etc/secrets/slack-api-url    # <------------------- THIS
        title: '{{ template "default.title" . }}'
        text: '{{ template "default.description" . }}'
        pretext: '{{ template "slack.pretext" . }}'
        color: '{{ template "slack.color" . }}'
        footer: '{{ template "slack.footer" . }}'
        send_resolved: true
        actions:
          - type: button
            text: "Query :mag:"
            url: '{{ template "alert_query_url" . }}'
          - type: button
            text: "Silence :no_bell:"
            url: '{{ template "alert_silencer_url" . }}'
          - type: button
            text: "Karma UI :mag:"
            url: '{{ template "alert_karma_url" . }}'
          - type: button
            text: "Runbook :green_book:"
            url: '{{ template "alert_runbook_url" . }}'
          - type: button
            text: "Grafana :chart_with_upwards_trend:"
            url: '{{ template "alert_grafana_url" . }}'
          - type: button
            text: "KB :mag:"
            url: '{{ template "alert_kb_url" . }}'
  - name: opsgenie
    opsgenie_configs:
      - send_resolved: true
        api_key_file: /etc/secrets/opsgenie-api-key    # <------------------- THIS
        message: '{{ template "default.title" . }}'
        description: '{{ template "default.description" . }}'
        source: '{{ template "opsgenie.default.source" . }}'
        priority: '{{ template "opsgenie.default.priority" . }}'
        tags: '{{ template "opsgenie.default.tags" . }}'
If you want to use the email functionality of email_config, simply use the same approach with:
[ auth_password_file: <string> | default = global.smtp_auth_password_file ]
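For the original SMTP question, the same pattern could look roughly like this. This is only a sketch, assuming your Alertmanager version supports the *_file options quoted above and that you add the SendGrid API key to the same mounted secret under a hypothetical key named smtp-password:

# Add the SMTP password to the same secret that is mounted at /etc/secrets
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-secrets
type: Opaque
stringData:
  smtp-password: YOUR-SENDGRID-API-KEY
---
# Alertmanager config: reference the mounted file instead of a plain-text password
global:
  smtp_auth_username: 'apikey'
  smtp_auth_password_file: /etc/secrets/smtp-password

This keeps the actual key only in the Kubernetes Secret; the Alertmanager configuration (and your Git repository, if you deploy it with ArgoCD) only contains the file path.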

Related

Ansible loop to create docker networks

Suppose I have my docker networks defined in a variable such as:
docker_networks:
  - name: default
    driver: bridge
  - name: proxy
    driver: bridge
    ipam_options:
      subnet: '192.168.100.0/24'
  - name: socket_proxy
    driver: bridge
    ipam_options:
      subnet: '192.168.101.0/24'
How would I go about running this in a loop to create these Docker networks?
I tried the following; however, the ipam_config parameter causes it to fail if no subnet is defined:
- name: Create networks
  docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config:
      - subnet: '{{ item.ipam_options.subnet | default(omit) }}'
  loop: '{{ docker_networks }}'
If you modify your docker_networks variable so that the value of the ipam_options key is a list of dictionaries:
docker_networks:
  - name: proxy
    driver: bridge
    ipam_options:
      - subnet: '192.168.100.0/24'
  - name: socket_proxy
    driver: bridge
    ipam_options:
      - subnet: '192.168.101.0/24'
  - name: no_subnet
    driver: bridge
Then you can rewrite your task like this:
- name: Create networks
  community.docker.docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config: "{{ item.ipam_options | default(omit) }}"
  loop: '{{ docker_networks }}'
(I would also just rename the ipam_options key to ipam_config, so
that it matches the parameter name.)
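If you prefer to keep the original scalar ipam_options structure instead, another option is to build the list inline and omit the whole parameter when no subnet is defined. This is an untested sketch of that idea:

- name: Create networks
  community.docker.docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    # Pass a one-element list when a subnet exists, otherwise omit ipam_config entirely
    ipam_config: "{{ [{'subnet': item.ipam_options.subnet}] if item.ipam_options is defined else omit }}"
  loop: '{{ docker_networks }}'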

How to use environment/secret variables in Helm?

In my Helm chart, I have a few files that need credentials to be provided.
For example:
<Resource
  name="jdbc/test"
  auth="Container"
  driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
  url="jdbc:sqlserver://{{ .Values.DB.host }}:{{ .Values.DB.port }};selectMethod=direct;DatabaseName={{ .Values.DB.name }};User={{ .Values.DB.username }};Password={{ .Values.DB.password }}"
/>
I created a secret:
Name: databaseinfo
Data:
  username
  password
I then create environment variables to retrieve those secrets in my deployment.yaml:
env:
  - name: DBPassword
    valueFrom:
      secretKeyRef:
        key: password
        name: databaseinfo
  - name: DBUser
    valueFrom:
      secretKeyRef:
        key: username
        name: databaseinfo
In my values.yaml (or that other file), I need to be able to reference this secret/environment variable. I tried the following, but it does not work:
values.yaml
DB:
  username: $env.DBUser
  password: $env.DBPassword
You can't pass variables from a template to values.yaml with Helm, only from values.yaml to the templates.
The answer you are looking for was posted by mehowthe:
deployment.yaml =
env:
{{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value }}
{{- end }}
values.yaml =
env:
  - name: "DBUser"
    value: ""
  - name: "DBPassword"
    value: ""
then
helm install chart_name --name release_name --set env[0].value="FOO" --set env[1].value="BAR"
(with env defined as a list, the individual entries are addressed by index in --set)
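If you would rather keep the credentials out of values.yaml and --set entirely, one option is to let the loop emit secretKeyRef entries that point at the databaseinfo Secret from the question. This is a sketch with assumed key names (secretName, key), not the posted answer:

deployment.yaml =
env:
{{- range .Values.env }}
  - name: {{ .name }}
{{- if .secretName }}
    valueFrom:
      secretKeyRef:
        name: {{ .secretName }}
        key: {{ .key }}
{{- else }}
    value: {{ .value | quote }}
{{- end }}
{{- end }}
values.yaml =
env:
  - name: "DBUser"
    secretName: databaseinfo
    key: username
  - name: "DBPassword"
    secretName: databaseinfo
    key: password
The container then reads DBUser and DBPassword from its environment at runtime, instead of Helm injecting the credentials at render time.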

Azure yaml pipeline "Expected mapping end"

I want to define a deployment job (via a template), but when running my Azure pipeline the following error is displayed:
job-deploy.yml#templates: Expected mapping end
Where is my issue?
Here is the template being called:
parameters:
  - name: clientBaseName
    type: string
  - name: environment
    type: string
  - name: aks
    type: string
  - name: helm
    type: string
    default: 'helm3'
    values:
      - 'helm2'
      - 'helm3'
jobs:
  - deployment: deploy_{{ parameters.environment }}
    displayName: 'Deploy a MyPlace client.'
    environment: approvals-demo-core
    strategy:
      runOnce:
        preDeploy:
          steps:
            - template: ../tasks/task-chart-setup.yml
              parameters:
                helm: ${{ parameters.helm }}
        deploy:
          steps:
            - template: ../tasks/task-chart-deploy.yml
              parameters:
                type: data
                namespace: ${{ parameters.clientBaseName }}-{{ parameters.environment }}
                charts: ./charts/data
                values: ./output/{{ parameters.environment }}/data.yaml
                aks: {{ parameters.aks }}
            - template: ../tasks/task-chart-deploy.yml
              parameters:
                type: services
                namespace: ${{ parameters.clientBaseName }}-{{ parameters.environment }}
                charts: ./charts/services
                values: ./output/{{ parameters.environment }}/services.yaml
                aks: {{ parameters.aks }}
            - template: ../tasks/task-chart-deploy.yml
              parameters:
                type: jobs
                namespace: ${{ parameters.clientBaseName }}-{{ parameters.environment }}
                charts: ./charts/jobs
                values: ./output/{{ parameters.environment }}/jobs.yaml
                aks: {{ parameters.aks }}
"Expected mapping end" usually indicates a YAML syntax error. In this case, the "$" is missing from the template expressions in your YAML file.
You need to change {{ parameters.environment }} to ${{ parameters.environment }} (and do the same for the other bare {{ ... }} expressions):
parameters:
  - name: clientBaseName
    type: string
  - name: environment
    type: string
  - name: aks
    type: string
  - name: helm
    type: string
    default: 'helm3'
    values:
      - 'helm2'
      - 'helm3'
jobs:
  - deployment: deploy_${{ parameters.environment }}
    displayName: 'Deploy a MyPlace client.'
    # environment: approvals-demo-core
    strategy:
      runOnce:
        preDeploy:
          steps:
            - template: ../tasks/task-chart-setup.yml
              parameters:
                helm: ${{ parameters.helm }}
        deploy:
          steps:
            - template: ../tasks/task-chart-deploy.yml
              parameters:
                type: data
                namespace: ${{ parameters.clientBaseName }}-${{ parameters.environment }}
                charts: ./charts/data
                values: ./output/${{ parameters.environment }}/data.yaml
                aks: ${{ parameters.aks }}
            - template: ../tasks/task-chart-deploy.yml
              parameters:
                type: services
                namespace: ${{ parameters.clientBaseName }}-${{ parameters.environment }}
                charts: ./charts/services
                values: ./output/${{ parameters.environment }}/services.yaml
                aks: ${{ parameters.aks }}
            - template: ../tasks/task-chart-deploy.yml
              parameters:
                type: jobs
                namespace: ${{ parameters.clientBaseName }}-${{ parameters.environment }}
                charts: ./charts/jobs
                values: ./output/${{ parameters.environment }}/jobs.yaml
                aks: ${{ parameters.aks }}

Gcloud: Getting output from a template

I have created a template to deploy a compute instance. The content of the template is given below:
resources:
  - name: {{ properties["name"] }}
    type: compute.v1.instance
    properties:
      zone: {{ properties["zone"] }}
      machineType: https://www.googleapis.com/compute/v1/projects/{{ properties["project"] }}/zones/{{ properties["zone"] }}/machineTypes/{{ properties["machinetype"] }}
      disks:
        - deviceName: boot
          type: PERSISTENT
          boot: true
          autoDelete: true
          initializeParams:
            sourceImage: {{ properties["sourceimage"] }}
      networkInterfaces:
        - network: https://www.googleapis.com/compute/v1/projects/{{ properties["project"] }}/global/networks/default
          accessConfigs:
            - name: External NAT
              type: ONE_TO_ONE_NAT
outputs:
  - name: var1
    value: 'testing'
  - name: var2
    value: 88
Deploying the template using gcloud, I expect values in the outputs field. However, after a successful deployment of the template, the outputs field is empty, as shown below:
{
  "outputs": [],
  "resources": [
    {
      "finalProperties": .....
    }
  ]
}
Please suggest if I am missing something.
It looks weird (and should not be possible) to use {{ properties["name"] }} with properties["name"] as a variable.
I think you should create a parameter for it, as shown here.
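If the template is invoked directly (for example with gcloud deployment-manager deployments create --template), declaring the properties in a schema file is one way to turn them into parameters. This is a sketch, assuming the template file is named instance.jinja:

# instance.jinja.schema
info:
  title: Compute instance template
  description: Creates a compute.v1.instance from the given properties

required:
  - name
  - zone
  - project
  - machinetype
  - sourceimage

properties:
  name:
    type: string
  zone:
    type: string
  project:
    type: string
  machinetype:
    type: string
  sourceimage:
    type: string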

Custom alert for pod memory utilisation in Prometheus

I created alert rules for pod memory utilisation in Prometheus. The alerts show up perfectly in my Slack channel, but they do not contain the name of the pod, so it is difficult to tell which pod has the issue.
They just show [FIRING:35] (POD_MEMORY_HIGH_UTILIZATION default/k8s warning). But when I look at the "Alerts" section in the Prometheus UI, I can see the fired rules with their pod names. Can anyone help?
My alert notification template is as follows:
alertname: TargetDown
alertname: POD_CPU_HIGH_UTILIZATION
alertname: POD_MEMORY_HIGH_UTILIZATION
receivers:
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#devops'
        title: '{{ .CommonAnnotations.summary }}'
        text: '{{ .CommonAnnotations.description }}'
        send_resolved: true
I have added title: '{{ .CommonAnnotations.summary }}' and text: '{{ .CommonAnnotations.description }}' to my alert notification template, and now it shows the description. My description is description: pod {{$labels.pod}} is using high memory, but it only shows "is using high memory", without the pod name.
As mentioned in the article, you should check the alert rules and update them if necessary. See an example:
ALERT ElasticacheCPUUtilisation
  IF aws_elasticache_cpuutilization_average > 80
  FOR 10m
  LABELS { severity = "warning" }
  ANNOTATIONS {
    summary = "ElastiCache CPU Utilisation Alert",
    description = "Elasticache CPU Usage has breach the threshold set (80%) on cluster id {{ $labels.cache_cluster_id }}, now at {{ $value }}%",
    runbook = "https://mywiki.com/ElasticacheCPUUtilisation",
  }
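The example above uses the old Prometheus 1.x rule syntax. In current Prometheus 2.x rule files, an equivalent rule that carries the pod name into the description could look roughly like this; the metric name and threshold here are assumptions for illustration, not values from the question:

groups:
  - name: pod-memory
    rules:
      - alert: POD_MEMORY_HIGH_UTILIZATION
        # container_memory_working_set_bytes comes from the kubelet/cAdvisor metrics
        expr: container_memory_working_set_bytes{pod!=""} > 1e9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod memory high utilisation"
          description: "pod {{ $labels.pod }} is using high memory, now at {{ $value }} bytes"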
To provide an external URL for your Prometheus GUI, pass this CLI argument to your Prometheus server and restart it:
-web.external-url=http://externally-available-url:9090/
After that, you can put the values into your alertmanager configuration. See an example:
receivers:
  - name: 'iw-team-slack'
    slack_configs:
      - channel: alert-events
        send_resolved: true
        api_url: https://hooks.slack.com/services/<your_token>
        title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
        text: >-
          {{ range .Alerts }}
          *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
          *Description:* {{ .Annotations.description }}
          *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
          *Details:*
          {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
          {{ end }}
          {{ end }}
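For completeness, a sketch of a matching route (the receiver name reuses the example above, the grouping values are assumptions). Either grouping so that each notification covers a single pod, or iterating over .Alerts as in the receiver above, is what makes the individual pod names visible in the Slack message:

route:
  receiver: 'iw-team-slack'
  group_by: ['alertname', 'pod']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h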