Remove default vars from custom K8s Prometheus exporter

I'm starting to play with custom exporters (using Kubernetes, Grafana and Prometheus) and I have a problem. I managed to expose my metrics correctly, but every time I kill the pod that is sending them, the labels change and Grafana plots a different colour (as if it were a new series).
Is there any way to keep only app as a var? I think the problem is the labels that change (pod name and IP):
MyMetric{app="prometheus-export-mymetric",instance="172.26.32.69:3000",job="kubernetes-pods",kubernetes_namespace="default",kubernetes_pod_name="prometheus-export-mymetric-66694564b8-r4pqc",pod_template_hash="66694564b8"}
Thanks in advance.

Instead of kubernetes_pod_name you should use pod labels that stay the same after redeploying.
In our Prometheus config we are using something like this:
- job_name: kubernetes-pods
honor_timestamps: true
scrape_interval: 15s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
separator: ;
regex: "true"
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
separator: ;
regex: (.+)
target_label: __metrics_path__
replacement: $1
action: replace
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
separator: ;
regex: ([^:]+)(?::\d+)?;(\d+)
target_label: __address__
replacement: $1:$2
action: replace
- separator: ;
regex: __meta_kubernetes_pod_label_(.+)
replacement: $1
action: labelmap
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: kubernetes_namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: kubernetes_pod_name
replacement: $1
action: replace
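If the goal is really to keep only app, a further option is to drop the pod-specific labels at scrape time. A minimal sketch, assuming it is appended to the kubernetes-pods job above (metric_relabel_configs is applied to the scraped samples after the target labels from relabel_configs have been attached):
# Sketch only: drop the labels that change on every redeploy.
# `instance` is kept, since Prometheus still needs it to tell
# concurrently running targets apart.
metric_relabel_configs:
  - regex: kubernetes_pod_name|pod_template_hash
    action: labeldrop
Since instance still changes when the pod is rescheduled, the simpler fix is often on the query side: aggregate with something like sum by (app) (MyMetric), or set the Grafana legend to {{app}}, so the per-pod labels never reach the panel.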

Related

Unable to read custom logs in promtail

I am new to Promtail and struggling to understand the location from which it picks up logs for each pod. I installed the Loki stack with Promtail; below is my default scrape_config:
scrape_configs:
# See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
- job_name: kubernetes-pods
pipeline_stages:
- cri: {}
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels:
- __meta_kubernetes_pod_controller_name
regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
action: replace
target_label: __tmp_controller_name
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_name
- __meta_kubernetes_pod_label_app
- __tmp_controller_name
- __meta_kubernetes_pod_name
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: app
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_instance
- __meta_kubernetes_pod_label_release
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: instance
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_component
- __meta_kubernetes_pod_label_component
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: component
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: node_name
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
replacement: $1
separator: /
source_labels:
- namespace
- app
target_label: job
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- action: replace
source_labels:
- __meta_kubernetes_pod_container_name
target_label: container
- action: replace
replacement: /var/log/pods/*$1/*.log
separator: /
source_labels:
- __meta_kubernetes_pod_uid
- __meta_kubernetes_pod_container_name
target_label: __path__
- action: replace
regex: true/(.*)
replacement: /var/log/pods/*$1/*.log
separator: /
source_labels:
- __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
- __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
- __meta_kubernetes_pod_container_name
target_label: __path__
I want Loki/Promtail to pick up some other log files, but I am not sure in which location I should put these logs so that they are picked up by Loki/Promtail. I tried the following paths but it did not work:
/app/logs
/var/log/containers
/var/log/pods
/var/log
/app
Look forward to replies. Thanks in advance
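A note on why those paths may not work, plus a hedged sketch: Promtail normally runs as a DaemonSet and can only tail files that are visible on the node (typically /var/log/pods and /var/log/containers via hostPath mounts), so a path such as /app/logs inside an application container is invisible to it unless that directory is also mounted where Promtail can read it. Assuming such a mount exists, an extra job could look roughly like this (the job name and path are placeholders):
scrape_configs:
  # Hypothetical extra job that tails a node-visible directory; the
  # /var/log/myapp path must be mounted into the Promtail pod.
  - job_name: extra-app-logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: extra-app-logs
          __path__: /var/log/myapp/*.log
The usual alternative is to have the application log to stdout/stderr, so the existing kubernetes-pods job picks the lines up from /var/log/pods.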

Connection refused on Prometheus metrics target deployed on Minikube with a custom SSL certificate

I am currently having issues trying to get Prometheus to scrape the metrics for my Minikube cluster. Prometheus is installed via the kube-prometheus-stack Helm chart:
kubectl create namespace monitoring && \
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
helm repo update && \
helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack
I am currently accessing Prometheus through an Ingress with a locally signed TLS certificate, and it appears this is leading to conflicts, as the connection keeps getting refused by the cluster.
TLS is set up via Minikube ingress add-on:
kubectl create secret -n kube-system tls mkcert-tls-secret --cert=cert.pem --key=key.pem
minikube addons configure ingress <<< "kube-system/mkcert-tls-secret" && \
minikube addons disable ingress && \
minikube addons enable ingress
It seems Prometheus can't get access to http-metrics as a target.
Here is my Prometheus configuration:
global:
scrape_interval: 30s
scrape_timeout: 10s
evaluation_interval: 30s
external_labels:
prometheus: monitoring/prometheus-stack-kube-prom-prometheus
prometheus_replica: prometheus-prometheus-stack-kube-prom-prometheus-0
alerting:
alert_relabel_configs:
- separator: ;
regex: prometheus_replica
replacement: $1
action: labeldrop
alertmanagers:
- follow_redirects: true
enable_http2: true
scheme: http
path_prefix: /
timeout: 10s
api_version: v2
relabel_configs:
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: prometheus-stack-kube-prom-alertmanager
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: http-web
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
kubeconfig_file: ""
follow_redirects: true
enable_http2: true
namespaces:
own_namespace: false
names:
- monitoring
rule_files:
- /etc/prometheus/rules/prometheus-prometheus-stack-kube-prom-prometheus-rulefiles-0/*.yaml
scrape_configs:
- job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-controller-manager/0
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics
scheme: https
authorization:
type: Bearer
credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
follow_redirects: true
enable_http2: true
relabel_configs:
- source_labels: [job]
separator: ;
regex: (.*)
target_label: __tmp_prometheus_job_name
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app]
separator: ;
regex: (kube-prometheus-stack-kube-controller-manager);true
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release]
separator: ;
regex: (prometheus-stack);true
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: http-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: job
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_service_label_jobLabel]
separator: ;
regex: (.+)
target_label: job
replacement: ${1}
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: http-metrics
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
kubeconfig_file: ""
follow_redirects: true
enable_http2: true
namespaces:
own_namespace: false
names:
- kube-system
- job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-etcd/0
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
authorization:
type: Bearer
credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
follow_redirects: true
enable_http2: true
relabel_configs:
- source_labels: [job]
separator: ;
regex: (.*)
target_label: __tmp_prometheus_job_name
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app]
separator: ;
regex: (kube-prometheus-stack-kube-etcd);true
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release]
separator: ;
regex: (prometheus-stack);true
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: http-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: job
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_service_label_jobLabel]
separator: ;
regex: (.+)
target_label: job
replacement: ${1}
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: http-metrics
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
kubeconfig_file: ""
follow_redirects: true
enable_http2: true
namespaces:
own_namespace: false
names:
- kube-system
I am also currently accessing the Prometheus instance from outside the cluster (this works just fine) with an Ingress using the TLS certificate:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prometheusdashboard-ingress
namespace: monitoring
labels:
name: prometheusdashboard-ingress
spec:
tls:
- hosts:
- prometheus.demo
secretName: mkcert-tls-secret
rules:
- host: prometheus.demo
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: prometheus-stack-kube-prom-prometheus
port:
number: 9090
Here's the output on the Prometheus Targets page (screenshot omitted).
How do I get the stack access to this TLS certificate, which I assume is the main issue here?
Solution to the problem
After some thorough analysis of my various Kubernetes clusters, I have found that the following errors exist:
Docker Desktop Environment throws this:
Warning Failed 6s (x3 over 30s) kubelet Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount
That solution tries to address it, but it didn't work out for me.
Minikube Environment
The same error was replicated there too. I opened the Minikube web UI and found which services were related to which ports (screenshot omitted). You can try port-forwarding them, but even that doesn't help much.
The only way I can see this working is to apply the configuration through the Prometheus chart.
After some more analysis
You can use the kubernetes-dashboard namespace using the following command to access the metrics.
After some debugging, these are the images used by Minikube:
kubernetesui/metrics-scraper
kubernetesui/dashboard:v2.3.1
K8S Provisioner
Okay now, what do I do with the Namespace?
You can access the namespace and check out the services.
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.98.229.151 <none> 8000/TCP 3m55s
kubernetes-dashboard ClusterIP 10.107.41.221 <none> 80/TCP 3m55s
Using the kubernetes-dashboard service, you can access metrics from the services. It goes to localhost:9090/metrics, where you can get some more metrics (output omitted).
EDIT
You can configure Prometheus to access these metrics.
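For instance, with kube-prometheus-stack the usual way to add a scrape target is a ServiceMonitor. A hedged sketch for the dashboard-metrics-scraper service; the selector labels, port name and release label are assumptions and need to match the actual objects in the cluster:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dashboard-metrics-scraper
  namespace: monitoring
  labels:
    release: prometheus-stack              # assumption: must match the chart's serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - kubernetes-dashboard
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper   # assumption: check the service's labels
  endpoints:
    - port: http                           # assumption: check the service's port name
      path: /metrics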
EDIT II
I am not a Prometheus expert, but I am of the opinion that the above-mentioned config file isn't best suited for scraping these metrics. You should consider using a custom config file.

How to pass authentication parameters to one service in Prometheus Kubernetes service discovery?

I am using Kubernetes service discovery in Prometheus but for one of the services I need to pass an authentication block with username and password like:
basic_auth:
username: prometheus
password: prom123456
I understand that the authentication is per job, so how can I combine both the discovery and the authentication for only one service of the discovery? (One possible approach is sketched after the config below.)
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
regex: (.+)
- regex: __meta_kubernetes_service_label_(.+)
action: labelmap
- regex: 'app_kubernetes_io_(.+)'
action: labeldrop
- regex: 'helm_sh_(.+)'
action: labeldrop
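One possible approach (a sketch only, not from the original post): since basic_auth is per job, keep the generic job as it is, exclude the protected service from it with a drop rule, and add a second job that carries the credentials and keeps only that service. Here secured-service is a placeholder name:
- job_name: 'kubernetes-service-endpoints-secured'
  basic_auth:
    username: prometheus
    password: prom123456
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    # Keep only the one service that needs credentials (placeholder name).
    - source_labels: [__meta_kubernetes_service_name]
      action: keep
      regex: secured-service
    # The remaining relabeling can mirror the generic job above.
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
In the original kubernetes-service-endpoints job, a matching rule with action: drop on __meta_kubernetes_service_name prevents that service from also being scraped without credentials.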

Prometheus configuration to scrape kube pod metrics

I have the below Prometheus configuration to fetch the Kubernetes pod metrics. However, in Prometheus I could see only two metrics related to pods, Kube_pod_CPU and Kube_pod_memory, but not any other pod metrics. Any help would be appreciated if there is a configuration issue.
- job_name: 'kubernetes-pods'
scrape_interval: 15s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
kubernetes_sd_configs:
- role: pod
namespaces:
names: ["namespace-name"]
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
separator: ":"
replacement: $1
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
separator: ":"
replacement: $1
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
separator: ":"
regex: (.+):(?:\d+);(\d+)
#regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:${2}
target_label: __address__
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: prometheus.*
- action: labelmap
separator: ":"
replacement: $1
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
separator: ":"
replacement: $1
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
separator: ":"
replacement: $1
action: replace
target_label: kubernetes_pod_name
Thanks
You have to add these three annotations to your deployment (a sketch of where they go follows the list):
prometheus.io/scrape: 'true'
prometheus.io/path: 'metrics'
prometheus.io/port: '80'
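For reference, a minimal sketch of where these annotations go; the names, image and port are placeholders, and the annotations belong on the pod template's metadata, not on the Deployment's own metadata:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-exporter                  # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-exporter
  template:
    metadata:
      labels:
        app: my-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"   # leading slash, matching the default /metrics
        prometheus.io/port: "80"
    spec:
      containers:
        - name: my-exporter
          image: my-exporter:latest  # placeholder
          ports:
            - containerPort: 80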
How will it work?
Look at the kubernetes-pods job of the config-map.yaml you are using to configure Prometheus:
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name

Kubernetes pod and service not showing in prometheus target

I have deployed Prometheus 2.0 on my multi-node Kubernetes cluster created with kubeadm. While accessing the Prometheus dashboard I am not able to view the pods and service jobs, even after configuring them in the Prometheus configuration YAML file.
The Prometheus targets are as follows: https://i.stack.imgur.com/jiQPG.png
Does this problem have anything to do with the Prometheus version? I think I am going wrong with the syntax part of the configuration.
global:
scrape_interval: 5s
scrape_timeout: 5s
evaluation_interval: 5s
scrape_configs:
- job_name: kubernetes-apiservers
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /metrics
scheme: https
kubernetes_sd_configs:
- api_server: null
role: endpoints
namespaces:
names: []
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: false
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
separator: ;
regex: default;kubernetes;https
replacement: $1
action: keep
- job_name: kubernetes-nodes
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /metrics
scheme: https
kubernetes_sd_configs:
- api_server: null
role: node
namespaces:
names: []
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: false
relabel_configs:
- separator: ;
regex: __meta_kubernetes_node_label_(.+)
replacement: $1
action: labelmap
- separator: ;
regex: (.*)
target_label: __address__
replacement: kubernetes.default.svc:443
action: replace
- source_labels: [__meta_kubernetes_node_name]
separator: ;
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
action: replace
- job_name: kubernetes-pods
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /metrics
scheme: https
kubernetes_sd_configs:
- api_server: null
role: pod
namespaces:
names: []
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: false
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
separator: ;
regex: "true"
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
separator: ;
regex: (.+)
target_label: __metrics_path__
replacement: $1
action: replace
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
separator: ;
regex: ([^:]+)(?::\d+)?;(\d+)
target_label: __address__
replacement: $1:$2
action: replace
- separator: ;
regex: __meta_kubernetes_pod_label_(.+)
replacement: $1
action: labelmap
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: kubernetes_namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: kubernetes_pod_name
replacement: $1
action: replace
- job_name: kubernetes-cadvisor
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /metrics
scheme: https
kubernetes_sd_configs:
- api_server: null
role: node
namespaces:
names: []
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: false
relabel_configs:
- separator: ;
regex: __meta_kubernetes_node_label_(.+)
replacement: $1
action: labelmap
- separator: ;
regex: (.*)
target_label: __address__
replacement: kubernetes.default.svc:443
action: replace
- source_labels: [__meta_kubernetes_node_name]
separator: ;
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
action: replace
- job_name: kubernetes-service-endpoints
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /metrics
scheme: https
kubernetes_sd_configs:
- api_server: null
role: endpoints
namespaces:
names: []
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
separator: ;
regex: "true"
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
separator: ;
regex: (https?)
target_label: __scheme__
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
separator: ;
regex: (.+)
target_label: __metrics_path__
replacement: $1
action: replace
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
separator: ;
regex: ([^:]+)(?::\d+)?;(\d+)
target_label: __address__
replacement: $1:$2
action: replace
- separator: ;
regex: __meta_kubernetes_service_label_(.+)
replacement: $1
action: labelmap
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: kubernetes_namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: kubernetes_name
replacement: $1
action: replace
Thanks
I assume you are doing the query you posted in the comments:
container_memory_usage_bytes{job="kubernetes-pods"}
This doesn't work, because you are filtering by the job name kubernetes-pods, but container_memory_usage_bytes comes from cAdvisor. So according to your config the job is named kubernetes-cadvisor.
Therefore this should work:
container_memory_usage_bytes{job="kubernetes-cadvisor"}
Since the series name is rather unique, you can just omit the job name:
container_memory_usage_bytes