I wonder if it is possible to change labels of pods on the fly so services route requests to those pods based on new labels.
For example I have two services A and B. Then I have 10 pods, where 5 have label type = A (matches service A) and the other 5 have label type = B (matches service B). At some point I want to change labels on pods to achieve a configuration of 2 with label type = A and 8 with label type = B.
I want to know if I can just change the labels and services will be updated accordingly without having to stop and start new pods with different labels.
You can change the labels on individual pods using the kubectl label command.
Changing the label of a running pod should not cause it to be restarted, and services will automatically detect and handle label changes.
So in other words, yes you can :)
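A minimal sketch (the pod name is a placeholder; --overwrite is needed because the type label already exists on the pod):
kubectl label pod <pod-name> --overwrite type=B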
The steps for a full demonstration are as follows:
1. Create two Deployments, each with its own label, and specify the number of pods you wish to have in each.
Deploytask1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task1deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: task1deploy
  template:
    metadata:
      labels:
        app: task1deploy
    spec:
      containers:
      - name: nodetask1
        image: nginx
        ports:
        - containerPort: 80
Deploy2task1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task1deploy2
spec:
  replicas: 5
  selector:
    matchLabels:
      app: task1deploy2
  template:
    metadata:
      labels:
        app: task1deploy2
    spec:
      containers:
      - name: node2task1
        image: nginx
        ports:
        - containerPort: 80
2. Create two Services:
kubectl expose deployment task1deploy --namespace=shifali --type=LoadBalancer --name=my-service
kubectl expose deployment task1deploy2 --namespace=shifali --type=LoadBalancer --name=my-service2
3. When you describe these services you will find 5 endpoints for each (these are the pods):
kubectl describe service my-service --namespace=shifali
Name: my-service
Endpoints: 10.32.0.12:80,10.32.0.7:80,10.32.0.8:80 + 3 more...
And similarly for service2
4. Now remove the app label from pod new11-68dfd7d4c8-64xhq and add the label "app=task1deploy2":
kubectl label pods new11-68dfd7d4c8-64xhq --namespace=shifali app-
kubectl label pods new11-68dfd7d4c8-64xhq "app=task1deploy2" --namespace=shifali
Now the services will show a different number of endpoints (my-service=5 and my-service2=7). Note that because the pods are managed by Deployments, a Deployment's ReplicaSet spins up a replacement whenever one of its pods loses the matching label, which is why my-service still shows 5 endpoints while my-service2 gains the relabeled pods.
kubectl describe service my-service --namespace=shifali
Endpoints: 10.32.0.7:80,10.32.0.8:80,10.32.1.7:80 + 2 more...
kubectl describe service my-service2 --namespace=shifali
Name: my-service2
Endpoints: 10.32.0.10:80,10.32.0.12:80,10.32.0.9:80 + 4 more...
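If you just want to verify the change without the full describe output, the endpoints can also be listed directly (same namespace as above):
kubectl get endpoints --namespace=shifali
# Or watch a single service's endpoints update live:
kubectl get endpoints my-service --namespace=shifali --watch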
I have a question about labels and names in this example manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
I can see that the name of the Deployment is "nginx-deployment", but is the pod name "nginx", or is that the name of the running container?
Then I see in the console that the pods have a hash appended to the end of their names; I believe this is the revision number?
I just want to distinguish the names from the labels and the matchLabels, so that, for example, I can use this Service manifest to expose the pods with a certain label:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
Will this service expose all pods with the label app: nginx?
Thanks.
The Deployment has your specified name "nginx-deployment".
However, you did not define a Pod with a fixed name; you defined a template for the pods managed by this Deployment.
The Deployment manages 3 pods (because of your replicas: 3), so it will use the template to build these three pods.
There will also be a ReplicaSet with a hash, and this ReplicaSet will manage the Pods; this is better seen by following the example below.
Since a Deployment can manage multiple pods (like your example with 3 replicas), or needs a new Pod when updating them, it will not use the name from the template exactly, but will always append a hash value to keep the names unique.
But then you would have the problem of load-balancing all Pods behind one Kubernetes Service, because they have different names.
This is why you define a label "app: nginx" in your template, so all 3 Pods will carry this label regardless of their names and of other labels set by Kubernetes.
The Service uses the selector to find the correct Pods; in your case it will search for them by the label "app: nginx".
So yes, the Service will expose all 3 Pods of your Deployment and will load-balance traffic between them.
You can use --show-labels with kubectl get pods to see the names and the assigned labels.
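For example (the pod name and hash below are illustrative):
kubectl get pods --show-labels
# NAME                                ...  LABELS
# nginx-deployment-66b6c48dd5-8v9k2   ...  app=nginx,pod-template-hash=66b6c48dd5
Note that the pod-template-hash label shown here is added by the Deployment machinery itself.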
For a more complete example see:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
I have a deployment file with replicas set to 1. So when I do 'kubectl get ...' I get 1 record each for deployment, replicaset and pod.
Now I set replicas to 2 in deployment.yaml and apply it, and when I run the 'kubectl get ..' command, I get 2 records each for deployments, replicasets and pods.
Shouldn't the previous deployment be overwritten, resulting in a single deployment, and similarly for the replicaset (2 pods are OK since replicas is now set to 2)?
This is the deployment file's content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
...Now I set replicas to 2 in deployment.yaml, apply it and when I run 'kubectl get ..' command, I get 2 records each for deployments, replicaset and pods each.
Can you try kubectl get deploy --field-selector metadata.name=nginx-deployment? You should get just 1 deployment, and the number of pods should follow that deployment's replicas.
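If you really do see two Deployment records, they almost certainly differ in name or namespace; a quick way to check with stock kubectl:
kubectl get deployments --all-namespaces
# And to confirm which ReplicaSets belong to which deployment:
kubectl get replicasets -l app=nginx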
Both the ReplicaSet and the Deployment have the attribute replicas: 3; what's the difference between a Deployment and a ReplicaSet? Does a Deployment work via a ReplicaSet under the hood?
Configuration of the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    my-label: my-value
spec:
  replicas: 3
  selector:
    matchLabels:
      my-label: my-value
  template:
    metadata:
      labels:
        my-label: my-value
    spec:
      containers:
      - name: app-container
        image: my-image:latest
Configuration of the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
  labels:
    my-label: my-value
spec:
  replicas: 3
  selector:
    matchLabels:
      my-label: my-value
  template:
    metadata:
      labels:
        my-label: my-value
    spec:
      containers:
      - name: app-container
        image: my-image:latest
From the Kubernetes documentation, "When to use a ReplicaSet":
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.
The Deployment resource makes it easier to update your pods to a newer version.
Let's say you use ReplicaSet-A for controlling your pods, and then you wish to update your pods to a newer version: you would create ReplicaSet-B, then scale down ReplicaSet-A and scale up ReplicaSet-B one step at a time, repeatedly (this process is known as a rolling update). Although this does the job, it's not good practice and it's better to let K8s do it.
A Deployment resource does this automatically without any human interaction and increases the abstraction by one level.
Note: a Deployment doesn't interact with pods directly; it performs rolling updates through ReplicaSets.
A ReplicaSet ensures that a specified number of Pods is running in a cluster. The pods are called replicas and are the mechanism of availability in Kubernetes.
But changing the ReplicaSet's template will not affect existing Pods, so it is not possible to easily change, for example, the image version.
A Deployment is a higher abstraction that manages one or more ReplicaSets to provide a controlled rollout of a new version. When the image version is changed in the Deployment, a new ReplicaSet for this version is created with initially zero replicas. It is then scaled up to one replica; once that replica is running, the old ReplicaSet is scaled down. (The number of newly created pods, the step size so to speak, can be tuned.)
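You can watch this rollout happen; a minimal sketch using the my-deployment manifest above (the v2 image tag is a placeholder):
kubectl set image deployment/my-deployment app-container=my-image:v2
# In another terminal, watch the new ReplicaSet scale up while the old one scales down:
kubectl get replicasets --watch
kubectl rollout status deployment/my-deployment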
As long as you don't have a rollout in progress, a Deployment will result in a single active ReplicaSet, with the replica count managed by the Deployment.
I would recommend always using a Deployment and not a bare ReplicaSet.
One of the differences between Deployments and ReplicaSets is that changes made to the container template are not reflected in existing pods once the ReplicaSet is created.
For example:
This is my replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5 # tells the ReplicaSet to run 5 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.2
        ports:
        - containerPort: 80
I will apply this ReplicaSet and inspect a pod using these commands:
kubectl apply -f replicaset.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
From the pod description, we can observe that nginx is using image 1.13.2. Now let's change the image version to 1.14.2 in replicaset.yaml.
Apply the changes again:
kubectl apply -f replicaset.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
Now we don't see any changes in the Pods; they are still using the old image.
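With a bare ReplicaSet, one way to make a pod pick up the new template is to delete it; the ReplicaSet recreates the pod from the updated template:
kubectl delete pod <<name_of_pod>>
kubectl get pods
# The replacement pod now uses nginx:1.14.2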
Now let us repeat the same with a deployment (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5 # tells the Deployment to run 5 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.2
        ports:
        - containerPort: 80
I will apply this Deployment and inspect a pod using these commands:
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
Change the deployment.yaml file to another version of the nginx image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5 # tells the Deployment to run 5 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I will apply this Deployment again using these commands:
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
Now we can see new pods, and the updated image appears in the pod description.
TL;DR:
Deployment manages -> ReplicaSet manages -> Pod(s), an abstraction of a container (e.g. a Docker container)
A Deployment in Kubernetes is a higher-level abstraction that represents a set of replicas of your application. It ensures that your desired number of replicas of your application are running and available.
A ReplicaSet, on the other hand, is a lower-level resource responsible for maintaining a stable set of replicas of your application, ensuring that a specified number of them are running and available.
Example:
Suppose you have a web application that you want to run on 3 replicas for high availability.
You would create a Deployment resource in Kubernetes, specifying the desired number of replicas as 3. The Deployment would then create and manage a ReplicaSet, which would in turn create and manage 3 replicas of your web application pod.
In summary, Deployment provides higher-level abstractions for scaling, rolling updates and rolling back, while ReplicaSet provides a lower-level mechanism for ensuring that a specified number of replicas of your application are running.
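As a sketch of those higher-level operations (the deployment name web-app and its container name app are illustrative):
kubectl scale deployment/web-app --replicas=5        # scaling
kubectl set image deployment/web-app app=web-app:v2  # rolling update
kubectl rollout undo deployment/web-app              # rolling back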
On my Deployment I can set replicas: 3, but then it only spins up one pod. It's my understanding that Kubernetes will then fire up more pods as needed, up to three.
But for the sake of uptime I want a minimum of three pods at all times, with the option to create more as needed, but never scaling down below 3.
Is this possible with a Deployment?
It works exactly the way you did it: you define 3 replicas in your Deployment.
Have you verified that you have enough resources for your Deployments?
Replicas example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3 # <------- The number of your desired replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
K8s will spin up a new pod if it can, meaning you have enough resources and the resources are valid (nodes, selectors, image, volumes, configuration, and so on).
If you have defined 3 replicas and you are still getting only 1, examine your Deployment or your events.
How to view events
# To view all the events (don't specify pod name or namespace)
kubectl get events
# Get the events for a specific pod (they appear at the end of its description)
kubectl describe pod <pod name> --namespace <namespace>
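If the pods never appear at all, the Deployment's own status and its ReplicaSets are worth checking too (stock kubectl commands):
kubectl describe deployment <deployment name> --namespace <namespace>
kubectl get replicaset --namespace <namespace>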
I want to deploy a Postgres service with replication on Kubernetes cluster.
I have defined a PetSet and a Service for this, but I am only able to define the same resource limits for all pods in a service, so Kubernetes assigns these pods to nodes randomly.
Is there a way to have a service whose pods use different resource configurations?
My current yaml for reference.
https://github.com/kubernetes/charts/blob/master/incubator/patroni/templates/ps-patroni.yaml
You cannot assign different configuration options (e.g. resource limits) to pods managed by the same controller; they are inherently meant to be identical. You would need to create multiple controllers to accomplish this.
For a service, you could have multiple Deployments with different nodeSelector configurations behind one Service definition.
E.g. you could label your nodes like this:
kubectl label nodes node-1 pool=freshhardware
kubectl label nodes node-2 pool=freshhardware
kubectl label nodes node-3 pool=shakyhardware
And then have two deployments like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
      nodeSelector:
        pool: freshhardware
... the second one could look the same with just these fields exchanged:
      nodeSelector:
        pool: shakyhardware
A service definition like this would then take all pods from both deployments into account:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
The drawback is of course that you'd have to manage two Deployments at a time, but that's kind of built into this problem anyway.
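To confirm that the Service picks up pods from both Deployments and that they landed on the intended node pools (stock kubectl flags):
kubectl get pods -l run=my-nginx -o wide
kubectl describe service my-nginx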