I want to collect metrics from a Deployment (with multiple pods) in Kubernetes, and one of my metrics is the number of calls that my deployment received. My question is about Prometheus: how can I tell Prometheus to call all the pods that are part of the deployment and collect metrics from them? And what is the best practice for achieving this?
I would highly recommend using the prometheus-operator to do all the heavy lifting of configuring Prometheus monitoring for your applications.
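If you don't have the operator running yet, one common way to install it is via Helm; this is a sketch, assuming Helm is available and using an arbitrary release name (monitoring):

# Add the prometheus-community chart repository and install the
# kube-prometheus-stack chart, which bundles the prometheus-operator:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack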
For example, given a Deployment and Service like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
You can then configure a ServiceMonitor object, which uses the Service as a service discovery endpoint to find all the pods of the Deployment. This assumes that your application exposes metrics on the HTTP path /metrics.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
This will make Prometheus scrape the metrics of your application.
You can read more about ServiceMonitors here: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
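For completeness, the Prometheus custom resource itself must be told which ServiceMonitors to pick up. A minimal sketch, following the getting-started guide linked above (the resource name and serviceAccountName are assumptions; the team: frontend selector matches the ServiceMonitor label used earlier):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus   # assumed; needs RBAC to list services, endpoints and pods
  serviceMonitorSelector:
    matchLabels:
      team: frontend               # picks up the ServiceMonitor defined above
  resources:
    requests:
      memory: 400Mi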
I am attaching an image of my application flow. The Gateway and the other services are built with NestJS; every API request comes in through the Gateway.
The Gateway pod and the API pods communicate over TCP.
After deployment, the Gateway is not able to discover any API pods.
I am also attaching the YAML for both the Gateway and the API pods.
Please let me know what mistake I am making in the YAML files.
**APPLICATION DIAGRAM**
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to reach the services by their DNS names, for example pod1-srv.roushan.svc.cluster.local (or just pod1-srv from within the same namespace). If this does not work, you may need to look at the Kubernetes DNS setup.
I have not used AKS; it may use a different cluster domain than cluster.local.
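A quick way to check DNS resolution from inside the cluster is a throwaway pod; this sketch assumes the busybox image is pullable in your cluster (the pod is deleted when the command exits):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup pod1-srv.roushan.svc.cluster.local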
YAML Points
Ideally, you should use a different selector for each deployment, but you are using the same selector for both the Gateway deployment and the application deployment.
A Service forwards traffic to pods based on selectors and labels, so with identical labels a request meant for pod1-srv may end up on the Gateway pod, and vice versa (see the corrected sketch at the end of this answer).
Networking
Your gateway pods can reach an internal service by just its service name, e.g. pod1-srv, if they are in the same namespace.
If the gateway and the application are in different namespaces, address each other as http://<servicename>.<namespace>.svc.cluster.local.
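As a minimal sketch of the selector fix, give the API deployment and its Service their own label (the value app: pod1-app is an assumption; any value that differs from the gateway's app: roushan-app works):

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1-app          # distinct from the gateway's app: roushan-app
  template:
    metadata:
      labels:
        app: pod1-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: pod1-app            # now only selects the API pods
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000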
I am new to Kubernetes and am trying to create a deployment. First I created a ReplicaSet in a file named rs.yml, shown below.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
and applied it using
kubectl apply -f rs.yml
Now, instead of rewriting all of this in the deployment, I just want to refer to this rs.yml file or service inside my deployment.yml file.
You can create a single YAML file and put both things, the workload and the Service, inside it.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
You can divide the documents using --- to make a single YAML file.
Also, a suggestion: don't create ReplicaSets directly, because by default a Deployment creates a ReplicaSet in the background.
Use kind: Deployment instead; if you then run kubectl get rs, the ReplicaSet will still be there, since the Deployment creates and manages it in the background.
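For reference, the Deployment version of your rs.yml is nearly identical; only the kind changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx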
I need to hard-code the address of a CouchDB instance into another server in my Kubernetes cluster. I'm not super familiar with Kubernetes, but I know that the IP will change each time the cluster or the pod is rebuilt, so I can't use that.
What is the URL of this Kubernetes service, i.e. what should I hard-code into my server's Docker image so it will always find the CouchDB server in the system? I think it will be in this format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: orderer
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
If "wget 172.17.0.2:5984" works what should "172.17.0.2" be replaced with
The following is not correct
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.kino-couch.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.svc.cluster.local:5984
For a StatefulSet you need to create a headless Service that is responsible for the network identity of the pods, providing stable DNS entries. Notice clusterIP: None in the example below.
apiVersion: v1
kind: Service
metadata:
  name: couch-service
  labels:
    app: kino-couch
spec:
  ports:
  - port: 5984
  clusterIP: None
  selector:
    app: kino-couch
The StatefulSet needs to refer to the above Service in serviceName, so the StatefulSet YAML would look like this:
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: couch-service
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
Then, as a client, you can use couch-service.<namespace>.svc.cluster.local:5984 to connect to any of the CouchDB pods.
If you want to connect to a specific pod, use kino-couch-0.couch-service.<namespace>.svc.cluster.local:5984. This is typically needed when the CouchDB pods connect to each other to form a cluster.
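For example, assuming everything runs in the default namespace, both forms can be checked with wget, mirroring your test above:

# Any CouchDB pod behind the headless service:
wget couch-service.default.svc.cluster.local:5984
# Specifically the pod kino-couch-0:
wget kino-couch-0.couch-service.default.svc.cluster.local:5984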
I am wondering what to specify in a separate deployment in order to have it access a DB deployment/service. Here is the DB deployment/service:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  labels:
    app: oracle-db
spec:
  ports:
  - name: oracle-db
    port: 1521
    protocol: TCP
    targetPort: 1521
  selector:
    app: oracle-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oracle-db-depl
  labels:
    app: oracle-db
spec:
  selector:
    matchLabels:
      app: oracle-db
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle-db
    spec:
      containers:
      - name: oracle-db
        image: oracledb:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1521
        env:
          ...
How exactly do I specify the connection in the separate deployment? Do I specify the oracle-db service name somewhere? So far I only specify a containerPort in the container.
If the other app's deployment is in the same namespace, you can refer to the Oracle service as oracle-db. Here is an example of a WordPress application using it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: oracle-db
        ports:
        - containerPort: 80
          name: wordpress
As you can see, the Oracle service is referred to as oracle-db in an environment variable.
If the service is in a different namespace than the app deployment, refer to it as oracle-db.<namespacename>.svc.cluster.local.
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Services in Kubernetes are an "abstract way to expose an application running on a set of Pods as a network service" (Kubernetes documentation).
You can access your pod by the IP and port that Kubernetes has given to it, but that's not good practice, because Pods can die and be replaced (when controlled by a Deployment/ReplicaSet). The new Pod gets a new IP, and everything in your app will start to fail.
To solve this, you can expose your Pod using a Service (as you have already done) and use the service-name:service-port assigned to the Service to access your Pod. In this case, even if the Pod dies and a new one is created, Kubernetes will keep forwarding the traffic to the right Pod.
I have a single node k8s cluster. I have two namespaces, call them n1 and n2. I want to deploy the same image, on the same port but in different namespaces.
How do I do this?
namespace yamls:
apiVersion: v1
kind: Namespace
metadata:
  name: n1

and

apiVersion: v1
kind: Namespace
metadata:
  name: n2
service yamls:
apiVersion: v1
kind: Service
metadata:
  name: my-app-n1
  namespace: n1
  labels:
    app: my-app-n1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app-n1

and

apiVersion: v1
kind: Service
metadata:
  name: my-app-n2
  namespace: n2
  labels:
    app: my-app-n2
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app-n2
deployment yamls:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n1
  labels:
    app: my-app-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n1
  template:
    metadata:
      labels:
        app: my-app-n1
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80

and

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n2
  labels:
    app: my-app-n2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n2
  template:
    metadata:
      labels:
        app: my-app-n2
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
waiter:v1 corresponds to this repo: https://hub.docker.com/r/adamgardnerdt/waiter
Surely I can do this, as namespaces are supposed to represent different environments, e.g. nonprod vs. prod? So surely I can deploy identically into two different "environments", aka "namespaces"?
For the Services you have specified namespaces; that is correct.
For the Deployments you should also specify namespaces, otherwise they will go to the default namespace.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n1
  namespace: n1
  labels:
    app: my-app-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n1
  template:
    metadata:
      labels:
        app: my-app-n1
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
I want to deploy the same image, on the same port but in different namespaces.
You are already doing that with your configs, except for the Deployment objects, which should refer to the correct namespaces (as mentioned in the answer from Ijaz Ahmad Khan). The apps are then available to other services in their namespaces under the DNS names my-app-n1 and my-app-n2 respectively.
Because waiter is a web server, I assume you would like to access both instances of it from the internet. Hence, you should:
change the type of both services to ClusterIP,
add an Ingress object per namespace, each containing a host name, e.g. myapp.com and staging.myapp.com respectively (see the sketch below),
put a load balancer in front of your cluster: the load balancer will use the Ingress objects to know which hostname matches which service (your cloud provider should create a load balancer automatically).
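A minimal Ingress sketch for the n1 namespace (the host name and ingressClassName are assumptions; they depend on your DNS setup and ingress controller; the n2 namespace would get the same object with staging.myapp.com and my-app-n2):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-n1
  namespace: n1
spec:
  ingressClassName: nginx        # assumed; use whatever controller your cluster runs
  rules:
  - host: myapp.com              # assumed host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-n1      # the (now ClusterIP) service in this namespace
            port:
              number: 80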