connect flask app to Prometheus in Kubernetes cluster - kubernetes

I'm new to Prometheus. I have a simple Flask app in a Kubernetes cluster, and I also have Prometheus/Grafana monitoring services in the cluster, in a namespace called prometheus-monitoring. The problem is that when I create a ServiceMonitor via a .yaml file to have Prometheus monitor my app, the target is not added, although the job does appear in the config. The status in Prometheus → Service Discovery is Dropped.
I have no idea why my service is not connected to the ServiceMonitor:
serviceMonitor/default/monitoring-webapp/0 (0 / 2 active targets)
app.py
from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)  # exposes /metrics automatically

@app.route('/api')
def index():
    return 'ok'
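For reference, this stdlib-only sketch (not the real prometheus_flask_exporter) shows roughly what the exporter provides: an HTTP endpoint at /metrics serving plain text in the Prometheus exposition format. The metric name used here is illustrative, not the exact one the library emits.

```python
# A stdlib-only sketch of a /metrics endpoint serving the Prometheus
# text exposition format. Metric name is illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

EXPOSITION = (
    "# HELP app_requests_total Total HTTP requests handled.\n"
    "# TYPE app_requests_total counter\n"
    'app_requests_total{path="/api"} 1\n'
)

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        body = EXPOSITION.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/metrics"
metrics_text = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(metrics_text)
```

This plain-text format is what Prometheus expects to find when it scrapes the target defined by the manifests below.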
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_one:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5000
          env:
            - name: flask_url
              value: http://flasktwo-service:5003
      imagePullSecrets:
        - name: dockersecret
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: service
      protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-webapp
  labels:
    release: prometheus-monitoring
    app: webapp
spec:
  endpoints:
    - path: /metrics
      port: service
      targetPort: 5000
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      app: webapp
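To make the "Dropped" status concrete, here is an illustrative Python sketch (a simplification, not Prometheus Operator code) of the two checks involved: the ServiceMonitor's namespace and label selectors decide whether the Service is discovered at all, and the endpoint's port name decides whether the discovered targets stay active or get dropped during relabeling.

```python
# Illustrative sketch (not Prometheus Operator source) of the two checks
# that decide whether a Service's endpoints become active targets.
def servicemonitor_selects(service, monitor):
    """Label/namespace selection: does the ServiceMonitor pick up the Service at all?"""
    if service["namespace"] not in monitor["namespaces"]:
        return False
    return all(service["labels"].get(k) == v
               for k, v in monitor["matchLabels"].items())

def endpoint_port_matches(service, endpoint):
    """Port matching: a target is kept only if the endpoint's `port`
    names an actual port on the Service; otherwise it is dropped."""
    return endpoint.get("port") in [p["name"] for p in service["ports"]]

# Shapes modeled on the manifests above.
webapp_service = {
    "namespace": "default",
    "labels": {"app": "webapp"},
    "ports": [{"name": "service", "port": 5000}],
}
monitor = {"namespaces": ["default"], "matchLabels": {"app": "webapp"}}

print(servicemonitor_selects(webapp_service, monitor))             # service is discovered
print(endpoint_port_matches(webapp_service, {"port": "web"}))      # name mismatch -> dropped
print(endpoint_port_matches(webapp_service, {"port": "service"}))  # name matches -> active
```

A Service can therefore show up under Service Discovery (the selectors match) while still contributing zero active targets (the port name does not).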

Finally I figured it out. The issue was the port name. Please find a working solution below.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  template:
    metadata:
      labels:
        component: backend
        instance: app
        name: containers-my-app
    spec:
      containers:
        - name: app
          image: dmitriy83/flask_one:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              name: webapp
      imagePullSecrets:
        - name: myregistrykey
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: webapp # crucial: without the named targetPort you will not get active targets in Prometheus
  selector:
    component: backend
    instance: app
    name: containers-my-app
Finally, monitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: webapp-super
  labels:
    component: backend
    instance: app
    name: containers-my-app
    release: kube-prometheus-stack # verify the release name of your Prometheus installation
  namespace: prometheus-monitoring # the namespace your Prometheus runs in
spec:
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  endpoints:
    - port: http # "http" is the port name given in service.yaml
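The fix works because of a chain of named-port references that must line up across the three manifests. This small sketch (names taken from the manifests above; the helper itself is purely illustrative) checks that chain:

```python
# Sketch of the named-port reference chain the fixed manifests rely on:
# containerPort name -> Service targetPort -> Service port name -> ServiceMonitor port.
def port_chain_ok(container_ports, service_port, endpoint_port_name):
    named = {p["name"]: p["containerPort"] for p in container_ports}
    if service_port["targetPort"] not in named:
        return False  # Service must point at a real, named container port
    return endpoint_port_name == service_port["name"]  # ServiceMonitor must use the Service port's name

# Values from the working manifests above.
container_ports = [{"name": "webapp", "containerPort": 5000}]
service_port = {"name": "http", "port": 5000, "targetPort": "webapp"}

print(port_chain_ok(container_ports, service_port, "http"))  # matches the working manifests
print(port_chain_ok(container_ports, service_port, "web"))   # endpoint port name mismatch
```

If any link is broken (an unnamed containerPort, a numeric targetPort that matches nothing, or a ServiceMonitor port that is not a Service port name), the targets never become active.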

Related

redis_exporter with Google Memorystore for Redis

I have a Prometheus instance (operator-prometheus) installed on my GKE cluster, and I would like to scrape metrics from my GCP-managed Redis instance.
I have the following deployment definition for redis_exporter:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-exporter
spec:
  selector:
    matchLabels:
      app: redis-exporter
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-exporter
      annotations:
        prometheus.io/port: "9121"
        prometheus.io/scrape: "true"
    spec:
      containers:
        - name: redis-exporter
          image: oliver006/redis_exporter:latest
          ports:
            - containerPort: 9121
          env:
            - name: REDIS_ADDR
              value: 'redis://10.xxx.151.xxx:6379'
          resources:
            limits:
              memory: "256Mi"
              cpu: "256m"
My service definition:
apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
    - name: redis-ext
      port: 9121
      protocol: TCP
      targetPort: 9121
  selector:
    app: redis-exporter
And finally, my ServiceMonitor definition
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-external-exporter
  labels:
    app: redis-external-exporter
    prometheus-instance: clusterwide
spec:
  jobLabel: app
  selector:
    matchLabels:
      app: redis-ext
  endpoints:
    - interval: 30s
      port: redis-ext
      honorLabels: true
      path: /metrics
  namespaceSelector:
    matchNames:
      - monitoring
Am I missing something? If I browse to port 9121 I'm able to see the metrics, so it seems it's just a matter of connecting all the pieces together.
Kindly asking for your help!

visual studio kubernetes project 503 error in azure

I have created a kubernetes project in visual studio 2019, with the default template. This template creates a WeatherForecast controller.
After that I published it to my ACR.
I used this command to create the AKS:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys --z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the azure portal.
I have deployed it to azure kubernetes (Standard_B2s), with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
service.yaml:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
    - port: 80 # SERVICE exposed port
      name: http # SERVICE port name
      protocol: TCP # The protocol the SERVICE will listen to
      targetPort: http # Port to forward to in the POD
ingress.yaml:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io # Which host is allowed to enter the cluster
      http:
        paths:
          - backend: # How the ingress will handle the requests
              service:
                name: kubernetes1 # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
            path: / # Which path is this rule referring to
            pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3
It's working now. For other people who have the same problem. I have updated my deployment config from:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: kubernetes1 # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: mycontainername.azurecr.io/kubernetes1:latest
          name: kubernetes1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: http
I don't know exactly which line solved the problem. Feel free to comment it if you know which line the problem was.

using prometheus pod to monitor a golang webapp pod

I have a golang webapp pod running in a Kubernetes cluster, and I'm trying to deploy a Prometheus pod to monitor it.
I set prometheus.io/port to 2112 in the service.yaml file, which is the port the golang webapp listens on, but when I go to the Prometheus dashboard, it says the 2112 endpoint is down.
I'm following this guide and tried this thread's solution, but I'm still getting a result saying the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
    - name: main
      protocol: TCP
      port: 80
      targetPort: 2112
      nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  selector:
    matchLabels:
      app: golang
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
        - name: gogo
          image: effy77/gogo2
          ports:
            - containerPort: 2112
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations and got my clarification from this thread: they need to go under the metadata of the service that Prometheus should scrape. So in my case they belong in goapp's metadata:
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
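This annotation-based convention works differently from ServiceMonitors: a kubernetes_sd relabeling config keeps only annotated services and rewrites the scrape port and path from the annotation values. The following sketch (illustrative only, not actual Prometheus relabeling code; the target-address format is a simplification) shows the effect:

```python
# Illustrative sketch of the classic prometheus.io/* annotation convention:
# keep only annotated services and derive the scrape target from the annotations.
def scrape_target(service):
    ann = service.get("annotations", {})
    if ann.get("prometheus.io/scrape") != "true":
        return None  # relabeling drops unannotated services
    port = ann.get("prometheus.io/port", "80")
    path = ann.get("prometheus.io/path", "/metrics")
    return f'{service["name"]}.{service["namespace"]}.svc:{port}{path}'

# The goapp service from the manifest above.
goapp = {
    "name": "goapp",
    "namespace": "monitoring",
    "annotations": {
        "prometheus.io/scrape": "true",
        "prometheus.io/path": "/metrics",
        "prometheus.io/port": "2112",
    },
}
print(scrape_target(goapp))  # goapp.monitoring.svc:2112/metrics
print(scrape_target({"name": "plain", "namespace": "default"}))  # None: no annotations
```

This is why the annotations had to move to goapp: Prometheus derives the target from the annotations of the scraped service itself, not from the Prometheus service or deployment.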

kubernetes ServiceMonitor added but no targets discovered (0/0 up)

I am trying to expose some custom metrics of a Kubernetes application to Prometheus.
I successfully deploy my app on Kubernetes. The ServiceMonitor is also added, but no targets are discovered (0/0 up).
The application is an nginx server with the relevant nginx-prometheus-exporter sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example-v3
  labels:
    app: nginx-example-v3
spec:
  selector:
    matchLabels:
      app: nginx-example-v3
  template:
    metadata:
      labels:
        app: nginx-example-v3
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: "128Mi"
              cpu: "100m"
          ports:
            - name: http
              containerPort: 8080
          volumeMounts:
            - name: "config"
              mountPath: "/etc/nginx/nginx.conf"
              subPath: "nginx.conf"
        - name: exporter
          image: nginx/nginx-prometheus-exporter:0.8.0
          ports:
            - containerPort: 9113
      volumes:
        - name: "config"
          configMap:
            name: "nginx-example-v2-config"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-example-v3
  name: nginx-example-v3
spec:
  type: LoadBalancer
  selector:
    app: nginx-example-v3
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: http-exporter
      port: 9113
      targetPort: 9113
After that I can see the nginx custom metrics exposed at the /metrics API.
Then I apply the monitoring service:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-example-v3
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      interval: 15s
      port: web
  selector:
    matchLabels:
      app: nginx-example-v3
I can see that the service is successfully added in the Prometheus "Service Discovery" section, BUT no targets are discovered (0/0 up) on the Prometheus side.
What am I missing?
Any help is really appreciated since I have been stuck on this for many days!
Thank you very much in advance. :-)
Replies
#efotopoulou
I slightly changed the service manifest and I am able to get the metrics now.
Sorry for the new discussion; I hope my manifests help other people configure their services.
This is the updated manifest. :-)
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-example-v3
  name: nginx-example-v3
spec:
  type: LoadBalancer
  selector:
    app: nginx-example-v3
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: web
      port: 9113
      targetPort: 9113

Deploying ambassador to kubernetes

I've been learning how to deploy Ambassador on Kubernetes on minikube from this tutorial, and that works: I can see the page for a successfully installed Ambassador. The main problem is that when I change the image of the UI so that the link should open a different app, it still opens the same Ambassador success page.
Previous tour.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: tour
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /
      service: tour:5000
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-backend_mapping
      prefix: /backend/
      service: tour:8080
      labels:
        ambassador:
          - request_label:
              - backend
spec:
  ports:
    - name: ui
      port: 5000
      targetPort: 5000
    - name: backend
      port: 8080
      targetPort: 8080
  selector:
    app: tour
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tour
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tour
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tour
    spec:
      containers:
        - name: tour-ui
          image: quay.io/datawire/tour:ui-0.2.1
          ports:
            - name: http
              containerPort: 5000
        - name: quote
          image: quay.io/datawire/tour:backend-0.2.1
          ports:
            - name: http
              containerPort: 8080
          resources:
            limits:
              cpu: "0.1"
              memory: 100Mi
Modified tour.yaml (removed the backend and changed the image):
---
apiVersion: v1
kind: Service
metadata:
  name: tour
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /
      service: tour:5000
spec:
  ports:
    - name: ui
      port: 5000
      targetPort: 5000
  selector:
    app: tour
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tour
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tour
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tour
    spec:
      containers:
        - name: tour-ui
          image: quay.io/integreatly/tutorial-web-app:2.10.5
          ports:
            - name: http
              containerPort: 5000
          resources:
            limits:
              cpu: "0.1"
              memory: 100Mi
ambassador-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 8080
  selector:
    service: ambassador
Please help; I'm really confused about what's causing this and how to resolve it.
What you're doing above is replacing the tour Kubernetes service and deployment with your own alternative. This is a bit of an unusual pattern; I'd suspect that there's probably a typo somewhere which means Kubernetes isn't picking up on your change.
I'd suggest creating a unique test Kubernetes service and deployment, and pointing the image in your deployment to your new image. Then you can register a new prefix with Ambassador.
You can also look at the Ambassador diagnostics (see https://www.getambassador.io/reference/diagnostics/) which will tell you which routes are registered with Ambassador.
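For intuition, the prefix-based routing those Mapping resources define can be sketched as follows (a deliberately simplified illustration, not Ambassador's actual routing code): among the Mappings whose prefix matches the request path, the longest prefix wins.

```python
# Simplified sketch of Ambassador Mapping prefix routing: among all
# mappings whose prefix matches the path, the longest prefix wins.
# Purely illustrative, not Ambassador's implementation.
mappings = [
    {"prefix": "/", "service": "tour:5000"},          # tour-ui_mapping
    {"prefix": "/backend/", "service": "tour:8080"},  # tour-backend_mapping
]

def route(path):
    candidates = [m for m in mappings if path.startswith(m["prefix"])]
    if not candidates:
        return None
    return max(candidates, key=lambda m: len(m["prefix"]))["service"]

print(route("/"))           # tour:5000
print(route("/backend/x"))  # tour:8080
```

This is also why removing the backend Mapping alone does not change what "/" serves: the "/" prefix still routes to whatever service tour:5000 resolves to, so a stale Service or Deployment behind that name keeps answering.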