I am trying to expose custom metrics of a Kubernetes application to Prometheus.
I deployed my app to Kubernetes successfully, and the ServiceMonitor is added, but no targets are discovered (0/0 up).
The application is an nginx server with the relevant nginx-prometheus-exporter sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example-v3
  labels:
    app: nginx-example-v3
spec:
  selector:
    matchLabels:
      app: nginx-example-v3
  template:
    metadata:
      labels:
        app: nginx-example-v3
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: "128Mi"
              cpu: "100m"
          ports:
            - name: http
              containerPort: 8080
          volumeMounts:
            - name: "config"
              mountPath: "/etc/nginx/nginx.conf"
              subPath: "nginx.conf"
        - name: exporter
          image: nginx/nginx-prometheus-exporter:0.8.0
          ports:
            - containerPort: 9113
      volumes:
        - name: "config"
          configMap:
            name: "nginx-example-v2-config"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-example-v3
  name: nginx-example-v3
spec:
  type: LoadBalancer
  selector:
    app: nginx-example-v3
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: http-exporter
      port: 9113
      targetPort: 9113
After that I can see the nginx custom metrics exposed at the /metrics endpoint.
Then I apply the ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-example-v3
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      interval: 15s
      port: web
  selector:
    matchLabels:
      app: nginx-example-v3
I can see that the service is successfully added in the Prometheus "Service Discovery" section, BUT no targets are discovered (0/0 up) by Prometheus.
What am I missing?
Any help is really appreciated, since I have been stuck on this for many days!
Thank you very much in advance. :-)
Replies
@efotopoulou
I slightly changed the Service manifest and I am able to get the metrics now.
Sorry for the new discussion. I hope my manifests help others configure their services.
This is the updated manifest. :-)
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-example-v3
  name: nginx-example-v3
spec:
  type: LoadBalancer
  selector:
    app: nginx-example-v3
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: web
      port: 9113
      targetPort: 9113
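For anyone hitting the same 0/0 up state: the decisive change above is that the exporter port is now named web, matching the port: web reference in the ServiceMonitor. A ServiceMonitor endpoint selects a Service port by its name, so the two must agree exactly. A minimal sketch of the pairing (values are illustrative):

```yaml
# Service side: the scrape port must carry the name the ServiceMonitor asks for
ports:
  - name: web        # <- this name...
    port: 9113
    targetPort: 9113
---
# ServiceMonitor side
spec:
  endpoints:
    - port: web      # <- ...must reference it exactly
```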
Related
I'm new to Prometheus. I have a simple Flask app in a Kubernetes cluster, and the Prometheus/Grafana monitoring services also run in the cluster, in a namespace called prometheus-monitoring. The problem is that when I create a ServiceMonitor via a .yaml file to connect my app to Prometheus, the target is not added, even though the job appears in the config. Its status in Prometheus "Service Discovery" is Dropped.
I have no idea why my service is not connecting to the ServiceMonitor:
serviceMonitor/default/monitoring-webapp/0 (0 / 2 active targets)
app.py
from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

@app.route('/api')
def index():
    return 'ok'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_one:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5000
          env:
            - name: flask_url
              value: http://flasktwo-service:5003
      imagePullSecrets:
        - name: dockersecret
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: service
      protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-webapp
  labels:
    release: prometheus-monitoring
    app: webapp
spec:
  endpoints:
    - path: /metrics
      port: service
      targetPort: 5000
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      app: webapp
Finally I figured it out. The issue was the port name. Please find a working solution below.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  template:
    metadata:
      labels:
        component: backend
        instance: app
        name: containers-my-app
    spec:
      containers:
        - name: app
          image: dmitriy83/flask_one:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              name: webapp
      imagePullSecrets:
        - name: myregistrykey
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: webapp # key detail: without this, Prometheus shows no active targets
  selector:
    component: backend
    instance: app
    name: containers-my-app
Finally, monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: webapp-super
  labels:
    component: backend
    instance: app
    name: containers-my-app
    release: kube-prometheus-stack # verify the release name of your Prometheus installation
  namespace: prometheus-monitoring # the namespace your Prometheus runs in
spec:
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  endpoints:
    - port: http # must match the port name declared in service.yaml
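When a ServiceMonitor shows Dropped or 0 active targets, it is usually one of three mismatches: the Service port name, the label selector, or the namespace/release labels. A few cluster-side checks that can narrow it down (a sketch using the names from the manifests above; these commands need access to your cluster):

```shell
# Does the Service actually have backing pods? Empty endpoints = selector mismatch.
kubectl -n default get endpoints webapp

# Do the Service's port names match the ServiceMonitor's endpoints[].port?
kubectl -n default get svc webapp -o jsonpath='{.spec.ports[*].name}'

# Does the ServiceMonitor carry the labels your Prometheus selects on?
kubectl -n prometheus-monitoring get prometheus -o jsonpath='{.items[*].spec.serviceMonitorSelector}'
```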
I have a Golang webapp pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I specified prometheus.io/port: '2112' in the service.yaml file, which is the port the Golang webapp listens on, but when I go to the Prometheus dashboard it says the 2112 endpoint is down.
I'm following this guide and tried the solution from this thread, but I still get a result saying the 2112 endpoint is down.
Below is the my service.yaml and deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
    - name: main
      protocol: TCP
      port: 80
      targetPort: 2112
      nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
        - name: gogo
          image: effy77/gogo2
          ports:
            - containerPort: 2112
  selector:
    matchLabels:
      app: golang
I will try adding prometheus.io/port: '2112' to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations; I got my clarification from this thread. They need to go under the metadata of the service that Prometheus should scrape, so in my case they belong in goapp's metadata.
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
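One caveat worth noting: the prometheus.io/* annotations are not interpreted by Prometheus out of the box; they only take effect if the scrape config contains the conventional relabeling rules that read them. A sketch of that rule set, assuming a hand-written prometheus.yml (not the Prometheus Operator) along the lines of the standard Kubernetes example config:

```yaml
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    # keep only services annotated prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # honour a custom metrics path from prometheus.io/path
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # rewrite the target address to the port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```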
I've set up the following Ingress and deployment for Traefik on Kubernetes, but I keep getting a bad gateway error on the actual domain name.
For some reason the service isn't working, or I have the connections wrong, or something is amiss with the selectors.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: wordpress
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    # traefik.ingress.kubernetes.io/frontend-entry-points: http,https
spec:
  rules:
    - host: test.example.services
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: http
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
My code is above, so if there are any corrections to be made, advice is appreciated.
There are a few things to consider:
I see that you are missing namespace: in your metadata:. Check if that is the case.
Try to create two services: one for wordpress and one for traefik-ingress-lb.
You might have used too many spaces after ports:. Try something like this:
ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
Check if your labels are correctly configured. If you need more details about them, try this documentation.
Please let me know if that helped.
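To verify the label wiring concretely, it can help to compare what the Service selects with what the pods actually carry (a sketch using the names from the manifests above; these commands need cluster access):

```shell
# Which pods does the Service currently match? An empty Endpoints line means the selector matches nothing.
kubectl describe svc web | grep -A1 Endpoints

# List the pods with their labels and compare against the Service selector
kubectl get pods --show-labels -l app=wordpress
```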
I've been learning how to deploy Ambassador on Kubernetes with minikube from this tutorial, and that works: I can see the page for a successfully installed Ambassador. The main problem is that when I change the UI image so the link should open a different app, it still opens the same Ambassador success page.
Previous tour.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: tour
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /
      service: tour:5000
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-backend_mapping
      prefix: /backend/
      service: tour:8080
      labels:
        ambassador:
          - request_label:
              - backend
spec:
  ports:
    - name: ui
      port: 5000
      targetPort: 5000
    - name: backend
      port: 8080
      targetPort: 8080
  selector:
    app: tour
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tour
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tour
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tour
    spec:
      containers:
        - name: tour-ui
          image: quay.io/datawire/tour:ui-0.2.1
          ports:
            - name: http
              containerPort: 5000
        - name: quote
          image: quay.io/datawire/tour:backend-0.2.1
          ports:
            - name: http
              containerPort: 8080
          resources:
            limits:
              cpu: "0.1"
              memory: 100Mi
Modified tour.yaml (removed the backend and changed the image):
---
apiVersion: v1
kind: Service
metadata:
  name: tour
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /
      service: tour:5000
spec:
  ports:
    - name: ui
      port: 5000
      targetPort: 5000
  selector:
    app: tour
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tour
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tour
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tour
    spec:
      containers:
        - name: tour-ui
          image: quay.io/integreatly/tutorial-web-app:2.10.5
          ports:
            - name: http
              containerPort: 5000
          resources:
            limits:
              cpu: "0.1"
              memory: 100Mi
ambassador-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 8080
  selector:
    service: ambassador
Please help; I'm really confused about what is causing this and how I can resolve it.
What you're doing above is replacing the tour Kubernetes service and deployment with your own alternative. This is a bit of an unusual pattern; I'd suspect that there's probably a typo somewhere which means Kubernetes isn't picking up on your change.
I'd suggest creating a unique test Kubernetes service and deployment, and pointing the image in your deployment to your new image. Then you can register a new prefix with Ambassador.
You can also look at the Ambassador diagnostics (see https://www.getambassador.io/reference/diagnostics/) which will tell you which routes are registered with Ambassador.
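A sketch of that suggestion: a separate Service with its own Mapping under a new prefix, so the original tour service stays untouched. The name, image selector, and prefix below are hypothetical placeholders:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app            # hypothetical name for the test service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: my-web-app_mapping
      prefix: /webapp/         # new prefix registered with Ambassador
      service: my-web-app:5000
spec:
  ports:
    - name: ui
      port: 5000
      targetPort: 5000
  selector:
    app: my-web-app            # must match your new deployment's pod labels
```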
I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: solidary-life
  annotations:
    kubernetes.io/ingress.global-static-ip-name: sl-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    app: sl
spec:
  rules:
    - host: app-solidair-vlaanderen.com
      http:
        paths:
          - path: /v0.0.1/*
            backend:
              serviceName: backend-backend
              servicePort: 8080
          - path: /auth/*
            backend:
              serviceName: security-backend
              servicePort: 8080
  tls:
    - secretName: solidary-life-tls
      hosts:
        - app-solidair-vlaanderen.com
The backend service is configured like:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: sl
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
        - name: backend-app
          image: gcr.io/solidary-life-218713/sv-backend:0.0.6
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /v0.0.1/api/online
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
and the auth server service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: security
  labels:
    app: sl-security
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
        - name: security-app
          image: gcr.io/solidary-life-218713/sv-security:0.0.1
          ports:
            - name: http
              containerPort: 8080
            - name: management
              containerPort: 9090
            - name: jgroups-tcp
              containerPort: 7600
            - name: jgroups-tcp-fd
              containerPort: 57600
            - name: jgroups-udp
              containerPort: 55200
              protocol: UDP
            - name: jgroups-udp-mc
              containerPort: 45688
              protocol: UDP
            - name: jgroups-udp-fd
              containerPort: 54200
              protocol: UDP
            - name: modcluster
              containerPort: 23364
            - name: modcluster-udp
              containerPort: 23365
              protocol: UDP
            - name: txn-recovery-ev
              containerPort: 4712
            - name: txn-status-mgr
              containerPort: 4713
          readinessProbe:
            httpGet:
              path: /auth/
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
Now I can go to the URLs:
https://app-solidair-vlaanderen.com/v0.0.1/api/online
https://app-solidair-vlaanderen.com/auth/
Sometimes this works; sometimes I get 404s. This is quite annoying, and I am quite new to Kubernetes, so I can't find the error.
Can it have something to do with the "sl" label that is on both the backend and the security service definitions?
Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each?
In essence, you have two Services that randomly select pods belonging to the security Deployment and the backend Deployment. One way to determine where a Service is really sending requests is to look at its endpoints by running:
kubectl -n <your-namespace> get ep (or describe ep)
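A sketch of the disambiguation: give each Deployment's pod template a distinct label and make each Service select on it. The sl-backend/sl-security values below are illustrative, and the pod template and Service selector must change together:

```yaml
# backend: pod template labels and matching Service selector
template:
  metadata:
    labels:
      app: sl-backend      # was: app: sl
      tier: web
---
spec:                      # Service backend-backend
  selector:
    app: sl-backend
    tier: web
---
# security: pod template labels and matching Service selector
template:
  metadata:
    labels:
      app: sl-security     # was: app: sl
      tier: web
---
spec:                      # Service security-backend
  selector:
    app: sl-security
    tier: web
```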