redis_exporter with Google Memorystore for Redis - kubernetes

I have a Prometheus instance (operator-prometheus) installed on my GKE cluster, and I would like to scrape metrics from my GCP-managed Redis instance.
I have the following deployment definition for redis_exporter:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-exporter
spec:
  selector:
    matchLabels:
      app: redis-exporter
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-exporter
      annotations:
        prometheus.io/port: "9121"
        prometheus.io/scrape: "true"
    spec:
      containers:
        - name: redis-exporter
          image: oliver006/redis_exporter:latest
          ports:
            - containerPort: 9121
          env:
            - name: REDIS_ADDR
              value: 'redis://10.xxx.151.xxx:6379'
          resources:
            limits:
              memory: "256Mi"
              cpu: "256m"
My service definition:
apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
    - name: redis-ext
      port: 9121
      protocol: TCP
      targetPort: 9121
  selector:
    app: redis-exporter
And finally, my ServiceMonitor definition
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-external-exporter
  labels:
    app: redis-external-exporter
    prometheus-instance: clusterwide
spec:
  jobLabel: app
  selector:
    matchLabels:
      app: redis-ext
  endpoints:
    - interval: 30s
      port: redis-ext
      honorLabels: true
      path: /metrics
  namespaceSelector:
    matchNames:
      - monitoring
Am I missing something? If I browse to port 9121 I can see the metrics, so it seems to be just a matter of connecting all the pieces together.
Kindly asking for your help!
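For reference, the Prometheus Operator matches a ServiceMonitor's spec.selector.matchLabels against the labels of the Service object itself (not the pods behind it), and the endpoint's port refers to the named Service port. A minimal sketch of the Service above with a label that would satisfy the ServiceMonitor shown here (the app: redis-ext label is an assumption added for illustration):

apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: monitoring
  labels:
    app: redis-ext            # must match the ServiceMonitor's selector.matchLabels
spec:
  ports:
    - name: redis-ext         # the ServiceMonitor endpoint's "port: redis-ext" refers to this name
      port: 9121
      protocol: TCP
      targetPort: 9121
  selector:
    app: redis-exporter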

Related

I am not able to access react-flask api on localhost using kubernetes

Flask deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api-container
          image: umarrafaqat/flask-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  type: ClusterIP
  ports:
    - port: 5000
  selector:
    app: flask-api
React deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      containers:
        - name: react-app-container
          image: umarrafaqat/react-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: react-app-service
spec:
  ports:
    - port: 3000
  selector:
    app: react-app
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: react-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              service:
                name: react-app-service
                port:
                  number: 3000
            path: /
            pathType: Prefix
I want to access this app on localhost but cannot do so. I am running it on Minikube.
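A hedged sketch of one way to reach the app locally, assuming the Ingress route is not strictly required: expose react-app-service as a NodePort (as below) and open it with minikube service react-app-service --url. Alternatively, keep the Ingress, enable the addon with minikube addons enable ingress and, depending on the driver, run minikube tunnel so the host: localhost rule answers on http://localhost.

apiVersion: v1
kind: Service
metadata:
  name: react-app-service
spec:
  type: NodePort              # exposes the Service on a port of the Minikube node
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: react-app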

Pods on different nodes can't talk to each other - Kubernetes

Context:
I am building an application and am now at the infrastructure step.
The application is built with Java and the persistence layer is MongoDB.
Problem:
If the application runs on the same node as the persistence layer, everything works, but on different nodes the application cannot communicate with MongoDB.
Here is a screenshot of the Kubernetes Dashboard:
As you can see, two application (gateway) pods are running on the same node as Mongo, but the other two are not, and those two cannot find MongoDB.
Here is the mongo-db.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /home/vitor/seguranca/mongo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: mongo-data
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
      name: mongo-service
    spec:
      volumes:
        - name: "deployment-storage"
          persistentVolumeClaim:
            claimName: "pvc"
      containers:
        - image: mongo
          name: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: "deployment-storage"
              mountPath: "/data/db"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-service
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
and here is the application.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: gateway
  name: gateway
spec:
  replicas: 4
  selector:
    matchLabels:
      app: gateway
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gateway
    spec:
      containers:
        - image: vitornilson1998/native-micro
          name: native-micro
          env:
            - name: MONGO_CONNECTION_STRING
              value: mongodb://mongo-service:27017 # HERE IS THE POINT THAT THE APPLICATION USES TO ACCESS MONGODB
            - name: MONGO_DB
              value: gateway
          resources: {}
          ports:
            - containerPort: 8080
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: gateway-service
  name: gateway-service
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: gateway
  type: NodePort
status:
  loadBalancer: {}
I can't see what is stopping the application from reaching MongoDB. What should I do?
I was using Calico as the CNI.
I removed Calico and let kube-proxy take care of everything.
Now everything is working fine.
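For reference, a throwaway debug pod like the sketch below (the pod name and image tag are illustrative) can help narrow down this kind of problem: if nslookup resolves mongo-service but the TCP connection fails only from pods scheduled on other nodes, the issue is usually in the CNI/overlay network rather than in the manifests.

apiVersion: v1
kind: Pod
metadata:
  name: net-debug                 # hypothetical helper pod for connectivity checks
spec:
  restartPolicy: Never
  containers:
    - name: net-debug
      image: busybox:1.36
      # resolve the headless service, then try the MongoDB port
      # (the -z flag may not be available in every busybox build)
      command: ["sh", "-c", "nslookup mongo-service && nc -z -w 2 mongo-service 27017 && echo reachable"]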

connect flask app to Prometheus in Kubernetes cluster

I'm new to Prometheus. I have a simple Flask app in a Kubernetes cluster, and I also have Prometheus/Grafana monitoring services in the cluster, in a namespace called prometheus-monitoring. The problem is that when I create a ServiceMonitor via a .yaml file to have Prometheus monitor my app, the target is not added, even though I can see in the config that the job was added. Its status in Prometheus Service Discovery is Dropped.
I have no idea why my service is not picked up by the ServiceMonitor:
serviceMonitor/default/monitoring-webapp/0 (0 / 2 active targets)
app.py
from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

@app.route('/api')
def index():
    return 'ok'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_one:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5000
          env:
            - name: flask_url
              value: http://flasktwo-service:5003
      imagePullSecrets:
        - name: dockersecret
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: service
      protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-webapp
  labels:
    release: prometheus-monitoring
    app: webapp
spec:
  endpoints:
    - path: /metrics
      port: service
      targetPort: 5000
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      app: webapp
Finally I figured it out. The issue was the port name. Please find a working solution below.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  template:
    metadata:
      labels:
        component: backend
        instance: app
        name: containers-my-app
    spec:
      containers:
        - name: app
          image: dmitriy83/flask_one:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              name: webapp
      imagePullSecrets:
        - name: myregistrykey
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: webapp # one of the major things: without it you would not have active targets in Prometheus
  selector:
    component: backend
    instance: app
    name: containers-my-app
Finally, monitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: webapp-super
  labels:
    component: backend
    instance: app
    name: containers-my-app
    release: kube-prometheus-stack # verify what your Prometheus release name is
  namespace: prometheus-monitoring # choose the namespace your Prometheus is in
spec:
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  endpoints:
    - port: http # http is the port name defined in service.yaml

using prometheus pod to monitor a golang webapp pod

I have a Golang webapp pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I set prometheus.io/port: to 2112 in the service.yaml file, which is the port the Golang webapp is listening on, but when I go to the Prometheus dashboard it says that the 2112 endpoint is down.
I'm following this guide and tried this thread's solution, but I still get a result saying the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
    - name: main
      protocol: TCP
      port: 80
      targetPort: 2112
      nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
        - name: gogo
          image: effy77/gogo2
          ports:
            - containerPort: 2112
  selector:
    matchLabels:
      app: golang
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations and got clarification from this thread: they need to go under the metadata of the service that Prometheus should scrape. So in my case they need to be in goapp's metadata.
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
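For context, those prometheus.io/* annotations only take effect if the prometheus-server-conf ConfigMap (not shown in the question) contains a kubernetes_sd_configs scrape job that acts on them; this is a convention, not built-in Prometheus behaviour. A minimal sketch of such a job, assuming the widely used annotation-based relabelling:

scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # keep only endpoints whose Service carries prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # use prometheus.io/path as the metrics path when present
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # rewrite the scrape address to the port given in prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__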

Connect redis exporter and prometheus operator

I have a Redis cluster and a Redis exporter in two separate deployments in the same namespace of a Kubernetes cluster. I am using the Prometheus Operator to monitor the cluster, but I cannot find a way to connect the exporter and the operator. I have set up a service targeting the Redis exporter (see below) and a ServiceMonitor (also below). If I port-forward to the Redis exporter service I can see the metrics. Also, the Redis exporter logs do not show any issues.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: foo
  name: redis-exporter
  labels:
    app: redis-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-exporter
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
      labels:
        app: redis-exporter
    spec:
      containers:
        - name: redis-exporter
          image: oliver006/redis_exporter:latest
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: REDIS_ADDR
              value: redis-cluster.foo.svc:6379
          ports:
            - containerPort: 9121
My Service and ServiceMonitor:
apiVersion: v1
kind: Service
metadata:
  name: redis-external-exporter
  namespace: foo
  labels:
    app: redis
    k8s-app: redis-ext
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: "9121"
spec:
  ports:
    - name: redis-ext
      port: 9121
      protocol: TCP
      targetPort: 9121
  selector:
    app: redis-exporter
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-external-exporter
  namespace: bi-infra
  labels:
    app: redis-external-exporter
    k8s-app: redis-monitor
spec:
  jobLabel: app
  selector:
    matchLabels:
      app: redis-ext
  namespaceSelector:
    matchNames:
      - foo
  endpoints:
    - port: redis-ext
      interval: 30s
      honorLabels: true
If I switch to a sidecar Redis exporter next to the Redis cluster, everything works properly. Has anyone faced such an issue?
I was missing spec.endpoints.path on the ServiceMonitor.
Here is an example manifest from a tutorial on adding new scraping targets and troubleshooting.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-pili
  namespace: monitoring
  labels:
    app: pili-service-monitor
spec:
  selector:
    matchLabels:
      # Target app service
      app: pili
  endpoints:
    - interval: 15s
      path: /metrics # <---
      port: uwsgi
  namespaceSelector:
    matchNames:
      - default