using prometheus pod to monitor a golang webapp pod - kubernetes

I have a golang webapp pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I set prometheus.io/port to 2112 in the service.yaml file, which is the port the golang webapp listens on, but the Prometheus dashboard reports the 2112 endpoint as down.
I'm following this guide and tried the solution from this thread, but I still get the same result: the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
  - name: main
    protocol: TCP
    port: 80
    targetPort: 2112
    nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus/"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-server-conf
      - name: prometheus-storage-volume
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
      - name: gogo
        image: effy77/gogo2
        ports:
        - containerPort: 2112
  selector:
    matchLabels:
      app: golang
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.

I was confused about where to put the annotations. I got my clarification from this thread: they need to go under the metadata of the service that Prometheus should scrape, so in my case they belong in goapp's metadata.
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
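For the annotations to have any effect, the Prometheus configuration itself (here, the prometheus-server-conf ConfigMap mounted into the Prometheus pod, whose contents aren't shown in the question) has to contain a Kubernetes service-discovery job that honours them. A minimal sketch of such a scrape job, assuming the conventional annotation-based relabelling, could look like this:

# Assumed fragment of prometheus.yml inside the prometheus-server-conf ConfigMap.
# It discovers Service endpoints and keeps only those whose Service carries
# prometheus.io/scrape: "true", rewriting the path and port from the annotations.
scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only endpoints whose Service is annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Use prometheus.io/path as the metrics path (falls back to /metrics otherwise).
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Rewrite the scrape address to use prometheus.io/port, e.g. 2112.
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__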

Related

Pods in different Nodes can't talk to each other - Kubernetes

Context:
I am building an application and now I am on the infrastructure step.
The application is built with Java and the persistence layer is MongoDB.
Problem:
If the application runs on the same node as the persistence layer, everything works, but when they run on different nodes the application cannot communicate with MongoDB.
Here is a screenshot of the Kubernetes Dashboard:
As you can see, two application (gateway) pods are running on the same node as Mongo, but the other two are not, and those two cannot find MongoDB.
Here is the mongo-db.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /home/vitor/seguranca/mongo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  volumeName: mongo-data
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
      name: mongo-service
    spec:
      volumes:
      - name: "deployment-storage"
        persistentVolumeClaim:
          claimName: "pvc"
      containers:
      - image: mongo
        name: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: "deployment-storage"
          mountPath: "/data/db"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-service
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
and here is the application.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: gateway
  name: gateway
spec:
  replicas: 4
  selector:
    matchLabels:
      app: gateway
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gateway
    spec:
      containers:
      - image: vitornilson1998/native-micro
        name: native-micro
        env:
        - name: MONGO_CONNECTION_STRING
          value: mongodb://mongo-service:27017 # HERE IS THE POINT THAT THE APPLICATION USES TO ACCESS MONGODB
        - name: MONGO_DB
          value: gateway
        resources: {}
        ports:
        - containerPort: 8080
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: gateway-service
  name: gateway-service
spec:
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: gateway
  type: NodePort
status:
  loadBalancer: {}
I can't see what is stopping the application from reaching MongoDB. What should I do?
I was using Calico as the CNI.
I removed Calico and let kube-proxy take care of everything.
Now everything is working fine.
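Before or after changing the CNI, a quick way to check whether cross-node DNS and pod networking are the culprit is a throwaway debug pod that resolves the headless mongo-service. This is only a hypothetical diagnostic sketch, not part of the original answer:

apiVersion: v1
kind: Pod
metadata:
  name: dns-debug            # hypothetical one-off pod; delete after testing
spec:
  restartPolicy: Never
  # Optionally pin the pod to the node where the failing gateway pods run
  # (spec.nodeName) to test resolution from that node specifically.
  containers:
  - name: debug
    image: busybox:1.36
    # The headless mongo-service resolves directly to the mongo pod IP;
    # if this lookup fails or returns nothing, DNS/CNI is the problem.
    command: ["sh", "-c", "nslookup mongo-service"]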

redis_exporter with Google Memorystore for Redis

I have a Prometheus instance installed on my GKE cluster (operator-prometheus), and I would like to scrape metrics from my GCP-managed Redis instance.
I have the following deployment definition for redis_exporter:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-exporter
spec:
  selector:
    matchLabels:
      app: redis-exporter
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-exporter
      annotations:
        prometheus.io/port: "9121"
        prometheus.io/scrape: "true"
    spec:
      containers:
      - name: redis-exporter
        image: oliver006/redis_exporter:latest
        ports:
        - containerPort: 9121
        env:
        - name: REDIS_ADDR
          value: 'redis://10.xxx.151.xxx:6379'
        resources:
          limits:
            memory: "256Mi"
            cpu: "256m"
My service definition:
apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
  - name: redis-ext
    port: 9121
    protocol: TCP
    targetPort: 9121
  selector:
    app: redis-exporter
And finally, my ServiceMonitor definition
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-external-exporter
  labels:
    app: redis-external-exporter
    prometheus-instance: clusterwide
spec:
  jobLabel: app
  selector:
    matchLabels:
      app: redis-ext
  endpoints:
  - interval: 30s
    port: redis-ext
    honorLabels: true
    path: /metrics
  namespaceSelector:
    matchNames:
    - monitoring
Am I missing something? If I browse to port 9121 I'm able to see the metrics, so it seems it's just a matter of connecting all the pieces together.
Kindly asking for your help!
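One detail worth double-checking (an observation based on the manifests above, not a confirmed fix): a ServiceMonitor selects Services by their labels, and the Service shown here has no metadata.labels at all, while the ServiceMonitor matches app: redis-ext. A sketch of a labelled Service that the ServiceMonitor above would actually pick up:

apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: monitoring
  labels:
    app: redis-ext          # must match the ServiceMonitor's spec.selector.matchLabels
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
  - name: redis-ext         # must match the ServiceMonitor's endpoints[].port
    port: 9121
    protocol: TCP
    targetPort: 9121
  selector:
    app: redis-exporter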

Restart pod when another service is recreated

I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect, since the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, somewhat like docker-compose's depends_on directive?
flask yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
      - name: proxy23-api
        image: my_image
        ports:
        - containerPort: 5000
        env:
        - name: DB_URI
          value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
        - name: DB_NAME
          value: db
        - name: PORT
          value: "5000"
      imagePullSecrets:
      - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
  - port: 9002
    targetPort: 5000
    nodePort: 30002
mongodb yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
      - name: proxy23-db
        image: mongo:bionic
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: proxy23-storage
          mountPath: /data/db
      volumes:
      - name: proxy23-storage
        persistentVolumeClaim:
          claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30003
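One option (a sketch, assuming cluster DNS can be made to work, which the question notes was not the case here): point DB_URI at the Service's stable DNS name instead of the injected PROXY23_DB_SERVICE_SERVICE_HOST variable. The name does not change when the Service is re-applied, so the Flask pods would not need a restart:

# Fragment of the flask Deployment's container spec, assuming both Deployments
# run in the same namespace; proxy23-db-service is the Service name from above.
env:
- name: DB_URI
  value: mongodb://proxy23-db-service:27017
- name: DB_NAME
  value: db
- name: PORT
  value: "5000"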

Connect redis exporter and prometheus operator

I have a Redis cluster and a Redis exporter in two separate deployments in the same namespace of a Kubernetes cluster. I am using the Prometheus operator to monitor the cluster, but I cannot find a way to connect the exporter and the operator. I have set up a service targeting the Redis exporter (see below) and a ServiceMonitor (also below). If I port-forward to the Redis exporter service I can see the metrics. Also, the Redis exporter logs do not show any issues.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: foo
  name: redis-exporter
  labels:
    app: redis-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-exporter
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
      labels:
        app: redis-exporter
    spec:
      containers:
      - name: redis-exporter
        image: oliver006/redis_exporter:latest
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REDIS_ADDR
          value: redis-cluster.foo.svc:6379
        ports:
        - containerPort: 9121
My service and ServiceMonitor:
apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: foo
  labels:
    app: redis
    k8s-app: redis-ext
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
  - name: redis-ext
    port: 9121
    protocol: TCP
    targetPort: 9121
  selector:
    app: redis-exporter
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-external-exporter
  namespace: bi-infra
  labels:
    app: redis-external-exporter
    k8s-app: redis-monitor
spec:
  jobLabel: app
  selector:
    matchLabels:
      app: redis-ext
  namespaceSelector:
    matchNames:
    - foo
  endpoints:
  - port: redis-ext
    interval: 30s
    honorLabels: true
If I switch to a sidecar Redis exporter next to the Redis cluster, everything works properly. Has anyone faced such an issue?
I was missing spec.endpoints.path on the ServiceMonitor.
Here is an example manifest from a tutorial on adding new scraping targets and troubleshooting.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-pili
  namespace: monitoring
  labels:
    app: pili-service-monitor
spec:
  selector:
    matchLabels:
      # Target app service
      app: pili
  endpoints:
  - interval: 15s
    path: /metrics   # <-- this was the missing field
    port: uwsgi
  namespaceSelector:
    matchNames:
    - default

Setting up LetsEncrypt HTTPS Traefik Ingress for Kubernetes Cluster

I've set up Kubernetes to use the Traefik Ingress to provide name-based routing. I am a little lost in terms of how to configure the automatic LetsEncrypt SSL certs. How do I reference the TOML files and configure for HTTPS? I am using a simple container with the NGINX image, shown below, to test this.
Below is my YAML for the deployment/service/ingress.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nginx:latest
        envFrom:
        - configMapRef:
            name: config
        ports:
        - containerPort: 80
I have also included my ingress.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
You could build a custom image and include the TOML file that way; however, that would NOT be best practice. Here's how I did it:
1) Deploy your TOML configuration to Kubernetes as a ConfigMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-traefik
  labels:
    app: traefik
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
      [entryPoints.https.tls]
    [acme]
    email = "you@email.com"
    storage = "/storage/acme.json"
    entryPoint = "https"
    acmeLogging = true
    onHostRule = true
    [acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dpl-traefik
  labels:
    k8s-app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik
  template:
    metadata:
      labels:
        k8s-app: traefik
      name: traefik
    spec:
      serviceAccountName: svc-traefik
      terminationGracePeriodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: cfg-traefik
      - name: cert-storage
        persistentVolumeClaim:
          claimName: pvc-traefik
      containers:
      - image: traefik:alpine
        name: traefik
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        - mountPath: "/storage"
          name: cert-storage
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --configFile=/config/traefik.toml
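The deployment above references a ServiceAccount (svc-traefik) and a PersistentVolumeClaim (pvc-traefik) that aren't shown. The PVC is what keeps acme.json, and therefore the issued certificates, across pod restarts. A minimal sketch of such a claim, with an assumed size and the cluster's default storage class, might be:

# Assumed claim backing the /storage mount; the name must match the
# claimName used in the dpl-traefik deployment above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-traefik
  labels:
    app: traefik
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi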