I have applied all of the following configuration, but the metric request_per_second does not appear when I run the command
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
In the Node.js app that should be monitored I installed prom-client. I tested the /metrics endpoint and it works fine, and the metric "request_count" is what it returns.
Here are the important parts of that Node.js code
(...)
const counter = new client.Counter({
name: 'request_count',
help: 'The total number of processed requests'
});
(...)
router.get('/metrics', async (req, res) => {
res.set('Content-Type', client.register.contentType)
res.end(await client.register.metrics())
})
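Assuming the counter is actually incremented somewhere in the elided request handlers, a quick local sanity check of the endpoint looks roughly like this (the port 3000 is only a placeholder for whatever port the app listens on):
# Placeholder port; use the port the Node.js app actually listens on
curl -s http://localhost:3000/metrics | grep request_count
# Expected output includes the HELP/TYPE lines plus a sample such as: request_count 42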
This is my ServiceMonitor configuration
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: un1qnx-validation-service-monitor-node
namespace: default
labels:
app: node-request-persistence
release: prometheus
spec:
selector:
matchLabels:
app: node-request-persistence
endpoints:
- interval: 5s
path: /metrics
port: "80"
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
namespaceSelector:
matchNames:
- un1qnx-aks-development
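One thing worth noting about the ServiceMonitor above: it selects Services by label, not Pods, and the endpoints port field refers to a Service port name. A rough check (assuming the Service for the app lives in un1qnx-aks-development) would be:
kubectl get svc -n un1qnx-aks-development -l app=node-request-persistence --show-labels
# A Service with the label app=node-request-persistence must exist, and "80" only matches
# if one of its ports is literally named "80"; otherwise use that port's name here.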
This is the node-request-persistence configuration
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: node-request-persistence
namespace: un1qnx-aks-development
name: node-request-persistence
spec:
selector:
matchLabels:
app: node-request-persistence
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "80"
labels:
app: node-request-persistence
spec:
containers:
- name: node-request-persistence
image: node-request-persistence
imagePullPolicy: Always # IfNotPresent
resources:
requests:
memory: "200Mi" # Gi
cpu: "100m"
limits:
memory: "400Mi"
cpu: "500m"
ports:
- name: node-port
containerPort: 80
This is the Prometheus Adapter configuration
prometheus:
url: http://prometheus-server.default.svc.cluster.local
port: 9090
rules:
custom:
- seriesQuery: 'request_count{namespace!="", pod!=""}'
resources:
overrides:
namespace: {resource: "namespace"}
pod: {resource: "pod"}
name:
as: "request_per_second"
metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))"
This is the HPA
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: un1qnx-validation-service-hpa-angle
namespace: un1qnx-aks-development
spec:
minReplicas: 1
maxReplicas: 10
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: un1qnx-validation-service-angle
metrics:
- type: Pods
pods:
metric:
name: request_per_second
target:
type: AverageValue
averageValue: "5"
The command
kubectl get hpa -n un1qnx-aks-development
results in "unknown/5"
Also, the command
kubectl get --raw "http://prometheus-server.default.svc.cluster.local:9090/api/v1/series"
Results in
Error from server (NotFound): the server could not find the requested resource
I think it should return some values about the collected metrics... I suspect the problem is in the ServiceMonitor, but I am new to this.
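For what it's worth, kubectl get --raw only talks to the Kubernetes API server, so it cannot reach the Prometheus service URL directly; that is why the call above returns NotFound. A sketch of querying Prometheus itself instead (the service port may be 80 or 9090 depending on the chart values):
kubectl port-forward svc/prometheus-server -n default 9090:80
# In another terminal, ask Prometheus whether it has the series at all:
curl -s 'http://localhost:9090/api/v1/series?match[]=request_count'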
As you noticed, I am trying to scale a deployment based on another deployment's pods; I don't know if there is a problem there.
I would appreciate an answer, because this is for my thesis.
Kubernetes - version 1.19.9
Prometheus - chart prometheus-14.2.1, app version 2.26.0
Prometheus Adapter - chart 2.14.2, app version 0.8.4
All were installed using Helm.
After some time I found the problems and changed the following:
I changed the port on the Prometheus Adapter, the time window in the query, and the names of the resource overrides. To find the right names for the resource overrides you need to port-forward to the Prometheus server and check the labels on the targets page for the app you are monitoring (see the commands after the config below).
prometheus:
url: http://prometheus-server.default.svc.cluster.local
port: 80
rules:
custom:
- seriesQuery: 'request_count{kubernetes_namespace!="", kubernetes_pod_name!=""}'
resources:
overrides:
kubernetes_namespace: {resource: "namespace"}
kubernetes_pod_name: {resource: "pod"}
name:
matches: "request_count"
as: "request_count"
metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>))"
I also added annotations to the deployment YAML:
spec:
selector:
matchLabels:
app: node-request-persistence
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "80"
labels:
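After redeploying the adapter, a final hedged check that the metric is now exposed under its new name (jq is optional):
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq '.resources[].name'
# The list should now include an entry like "pods/request_count".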
Related
I have deployed kube-state-metrics into the kube-system namespace, and prometheus-operator is running in the same cluster. I've written the ServiceMonitor below to send metrics to Prometheus, but it is not working. Please find the files below.
Servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
namespace: kube-system
spec:
selector:
matchLabels:
prometheus-scrape: "true"
endpoints:
- port: metrics
path: /metrics
targetPort: 8080
honorLabels: true
scheme: https
tlsConfig:
insecureSkipVerify: true
Prometheus-deploy.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
annotations:
argocd.argoproj.io/sync-wave: "1"
name: prometheus
labels:
name: prometheus
spec:
serviceAccountName: prometheus
serviceMonitorSelector: {}
serviceMonitorNamespaceSelector:
matchLabels:
prometheus-scrape: "true"
podMonitorSelector: {}
podMonitorNamespaceSelector:
matchLabels:
prometheus-scrape: "true"
resources:
requests:
memory: 400Mi
enableAdminAPI: false
additionalScrapeConfigs:
name: additional-scrape-configs
key: prometheus-additional.yaml
Can anyone please help me out with this issue? Thanks.
A ServiceMonitor's selector.matchLabels must match the labels on the Service itself (not the Deployment or Pod). Check whether your Service has the correct label.
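A rough way to check that, plus the namespace selector used in the Prometheus spec above (assuming the chart created a Service named kube-state-metrics in kube-system):
kubectl get svc kube-state-metrics -n kube-system --show-labels
# The Service needs the label prometheus-scrape=true to satisfy the ServiceMonitor selector.
kubectl get ns kube-system --show-labels
# Because the Prometheus spec uses serviceMonitorNamespaceSelector.matchLabels,
# the namespace holding the ServiceMonitor also needs the prometheus-scrape=true label.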
I have a Kubernetes deployment file user.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-deployment
namespace: stage
spec:
replicas: 1
selector:
matchLabels:
app: user
template:
metadata:
labels:
app: user
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9022'
spec:
nodeSelector:
env: stage
containers:
- name: user
image: <docker image path>
imagePullPolicy: Always
resources:
limits:
memory: "512Mi"
cpu: "250m"
requests:
memory: "256Mi"
cpu: "200m"
ports:
- containerPort: 8080
env:
- name: MODE
value: "local"
- name: PORT
value: ":8080"
- name: REDIS_HOST
value: "xxx"
- name: KAFKA_ENABLED
value: "true"
- name: BROKERS
value: "xxx"
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
namespace: stage
name: user
spec:
selector:
app: user
ports:
- protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: user
namespace: stage
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: user-deployment
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 300Mi
This deployment is already running with Linkerd injected, using the command cat user.yaml | linkerd inject - | kubectl apply -f -.
Now I want to add the Linkerd inject annotation (as mentioned here) and use kubectl apply -f user.yaml, just as I do for a deployment without Linkerd injected.
However, with the modified user.yaml (after adding the linkerd.io/inject annotation to the deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-deployment
namespace: stage
spec:
replicas: 1
selector:
matchLabels:
app: user
template:
metadata:
labels:
app: user
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9022'
linkerd.io/inject: enabled
spec:
nodeSelector:
env: stage
containers:
- name: user
image: <docker image path>
imagePullPolicy: Always
resources:
limits:
memory: "512Mi"
cpu: "250m"
requests:
memory: "256Mi"
cpu: "200m"
ports:
- containerPort: 8080
env:
- name: MODE
value: "local"
- name: PORT
value: ":8080"
- name: REDIS_HOST
value: "xxx"
- name: KAFKA_ENABLED
value: "true"
- name: BROKERS
value: "xxx"
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
namespace: stage
name: user
spec:
selector:
app: user
ports:
- protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: user
namespace: stage
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: user-deployment
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 300Mi
When I run kubectl apply -f user.yaml, it throws this error:
service/user unchanged
horizontalpodautoscaler.autoscaling/user configured
Error from server (BadRequest): error when creating "user.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.v1.EnvVar.Value: ReadString: expects " or n, but found 1, error found in #10 byte of ...|,"value":1},{"name":|..., bigger context ...|ue":":8080"}
Can anyone please point out where I have gone wrong in adding the annotation? Thanks.
Try with double quotes, like below:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9022"
linkerd.io/inject: enabled
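A non-destructive way to check whether the manifest now parses cleanly is a client-side dry run; this only validates locally and is a generic check, not something specific to Linkerd:
kubectl apply -f user.yaml --dry-run=client
# Unquoted numeric values in string fields (annotations, env values) should surface
# as a validation error here, without touching the cluster.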
I have a Prometheus instance installed on my GKE cluster (operator-prometheus), and I would like to scrape metrics from my GCP-managed Redis instance.
I have the following deployment definition for redis_exporter:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-exporter
spec:
selector:
matchLabels:
app: redis-exporter
replicas: 1
template:
metadata:
labels:
app: redis-exporter
annotations:
prometheus.io/port: "9121"
prometheus.io/scrape: "true"
spec:
containers:
- name: redis-exporter
image: oliver006/redis_exporter:latest
ports:
- containerPort: 9121
env:
- name: REDIS_ADDR
value: 'redis://10.xxx.151.xxx:6379'
resources:
limits:
memory: "256Mi"
cpu: "256m"
My service definition:
apiVersion: v1
kind: Service
metadata:
name: redis-external-exporter
namespace: monitoring
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: "9121"
spec:
ports:
- name: redis-ext
port: 9121
protocol: TCP
targetPort: 9121
selector:
app: redis-exporter
And finally, my ServiceMonitor definition
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: redis-external-exporter
labels:
app: redis-external-exporter
prometheus-instance: clusterwide
spec:
jobLabel: app
selector:
matchLabels:
app: redis-ext
endpoints:
- interval: 30s
port: redis-ext
honorLabels: true
path: /metrics
namespaceSelector:
matchNames:
- monitoring
Am I missing something? If I browse to port 9121 I'm able to see the metrics, so it seems it's just a matter of connecting all the pieces together.
Kindly asking for your help!
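One discrepancy worth double-checking in the manifests above: the ServiceMonitor matches app: redis-ext, but the Service as posted defines no labels at all (only annotations). A rough check:
kubectl get svc redis-external-exporter -n monitoring --show-labels
# For the ServiceMonitor to select it, the Service needs a label such as app: redis-ext,
# or the matchLabels must be changed to whatever labels the Service really carries.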
I have created a Kubernetes project in Visual Studio 2019 with the default template. This template creates a WeatherForecast controller.
After that I published it to my ACR.
I used this command to create the AKS cluster:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys --z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the Azure portal.
I have deployed it to Azure Kubernetes Service (Standard_B2s) with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes1-deployment
labels:
app: kubernetes1-deployment
spec:
replicas: 2
selector:
matchLabels:
app: kubernetes1
template:
metadata:
labels:
app: kubernetes1
spec:
containers:
- name: kubernetes1
image: mycontainername.azurecr.io/kubernetes1:latest
ports:
- containerPort: 80
service.yaml:
#service.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes1
spec:
type: ClusterIP
selector:
app: kubernetes1
ports:
- port: 80 # SERVICE exposed port
name: http # SERVICE port name
protocol: TCP # The protocol the SERVICE will listen to
targetPort: http # Port to forward to in the POD
ingress.yaml:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubernetes1
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
spec:
rules:
- host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io # Which host is allowed to enter the cluster
http:
paths:
- backend: # How the ingress will handle the requests
service:
name: kubernetes1 # Which service the request will be forwarded to
port:
name: http # Which port in that service
path: / # Which path is this rule referring to
pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3
It's working now. For other people who have the same problem: I updated my deployment config from this:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes1-deployment
labels:
app: kubernetes1-deployment
spec:
replicas: 2
selector:
matchLabels:
app: kubernetes1
template:
metadata:
labels:
app: kubernetes1
spec:
containers:
- name: kubernetes1
image: mycontainername.azurecr.io/kubernetes1:latest
ports:
- containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes1
spec:
selector: # Define the wrapping strategy
matchLabels: # Match all pods with the defined labels
app: kubernetes1 # Labels follow the `name: value` template
template: # This is the template of the pod inside the deployment
metadata:
labels:
app: kubernetes1
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- image: mycontainername.azurecr.io/kubernetes1:latest
name: kubernetes1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
name: http
I don't know exactly which line solved the problem. Feel free to comment if you know which line caused it.
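I can't say for sure either, but one plausible candidate is the named container port (name: http), since the Service's targetPort: http refers to a port by name and only resolves if a container port carries that name. Either way, a quick way to see whether the Service actually has backends:
kubectl get endpoints kubernetes1
# An empty ENDPOINTS column means the Service selector or targetPort doesn't match any
# ready pod, which is what the ingress reports as a 503.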
I have a Redis cluster and a Redis exporter in two separate deployments in the same namespace of a Kubernetes cluster. I am using the Prometheus operator to monitor the cluster, but I cannot find a way to wire up the exporter and the operator. I have set up a Service targeting the Redis exporter (see below) and a ServiceMonitor (also below). If I port-forward to the Redis exporter Service I can see the metrics, and the Redis exporter logs do not show any issues.
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: foo
name: redis-exporter
labels:
app: redis-exporter
spec:
replicas: 1
selector:
matchLabels:
app: redis-exporter
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9121"
labels:
app: redis-exporter
spec:
containers:
- name: redis-exporter
image: oliver006/redis_exporter:latest
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: REDIS_ADDR
value: redis-cluster.foo.svc:6379
ports:
- containerPort: 9121
My Service and ServiceMonitor:
apiVersion: v1
kind: Service
metadata:
name: redis-external-exporter
namespace: foo
labels:
app: redis
k8s-app: redis-ext
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: "9121"
spec:
ports:
- name: redis-ext
port: 9121
protocol: TCP
targetPort: 9121
selector:
app: redis-exporter
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: redis-external-exporter
namespace: bi-infra
labels:
app: redis-external-exporter
k8s-app: redis-monitor
spec:
jobLabel: app
selector:
matchLabels:
app: redis-ext
namespaceSelector:
matchNames:
- foo
endpoints:
- port: redis-ext
interval: 30s
honorLabels: true
If I switch to a sidecar Redis exporter next to the Redis cluster, everything works properly. Has anyone faced such an issue?
I was missing spec.endpoints.path on the ServiceMonitor
Here is an example manifest from a tutorial on adding new scraping targets and troubleshooting.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: monitoring-pili
namespace: monitoring
labels:
app: pili-service-monitor
spec:
selector:
matchLabels:
# Target app service
app: pili
endpoints:
- interval: 15s
path: /metrics <---
port: uwsgi
namespaceSelector:
matchNames:
- default
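After adding the missing path, one way to confirm the fix took effect (names taken from the manifests above) is to re-read the ServiceMonitor and then check the Prometheus targets page via a port-forward:
kubectl get servicemonitor redis-external-exporter -n bi-infra -o yaml
# spec.endpoints[0].path should now read /metrics; after Prometheus reloads its config,
# the redis-ext job should appear as UP on the Targets page.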