Which names should be the same in this k8s yaml - kubernetes

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: my-app
        ports:
        - containerPort: 8080
This is a sample YAML from the Kubernetes site. There are so many occurrences of my-app; do they all have to be the same? What is their purpose?

No, they don't all have to be the same. The name field can be different. The my-app references seen in the metadata and selector sections
are labels that can be used to glue the different Kubernetes objects together, or simply to select a subset of objects when querying Kubernetes. They will sometimes be the same.
Depending on how you've created the Deployment, you may have run: my-app throughout the Deployment and in the objects derived from it. Running kubectl run my-app --image=gcr.io/google-samples/hello-app:1.0 --replicas=3 would create a Deployment identical to the one you're referring to.
Here's how the different run: my-app labels are used, using the Deployment above as an inspiration:
The Deployment's template section is used to create the specified number of replicas (Pods). Each Pod gets a run: my-app label in its metadata section; from the Deployment's point of view, this is what it uses to select the Pods it's responsible for.
A similar selection of a subset of Pods using kubectl would be:
kubectl get pods -l run=my-app
This will give you all Pods labeled run: my-app.
To sum up: labels can be used to select a subset of resources when querying with e.g. kubectl, or by other Kubernetes resources to make their own selections. You can create your own labels, and they don't necessarily have to be the same throughout your specific Deployment, but if they are it becomes easy to query for every resource carrying a specific label.
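For instance, a Service is one of those other Kubernetes resources that selects Pods by label. A minimal sketch (the Service name and port numbers are assumptions, not part of the original question) that would route traffic to the Pods created above:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical Service name
spec:
  selector:
    run: my-app           # matches the Pod label set by the Deployment template
  ports:
  - port: 80              # assumed Service port
    targetPort: 8080      # matches containerPort in the Deployment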

Personally, I think the labels can be helpful for checking how Pods are grouped.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app          <--- Deployment object name, you can change it.
  labels:
    run: my-app         <--- Helpful for management, e.g. deleting everything carrying the same label (see the commands below).
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app       <--- Which labels this Deployment object controls.
  template:
    metadata:
      labels:
        run: my-app     <--- Yeah, it's the Pod's label. It can be used for grouping with other objects.
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: my-app
        ports:
        - containerPort: 8080
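As an example of the label-based management mentioned in the comments above (a sketch; the delete command removes every matching Deployment, so use it with care):

kubectl get all -l run=my-app              # list every resource carrying the label
kubectl delete deployment -l run=my-app    # delete Deployments carrying the label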

Related

How to (re-)name a pod in a K8s deployment?

I want to deploy two containers in a pod through a Deployment, but I want the pod to have exactly the name yoda. In my case, a random string is always appended after yoda, like yoda-f8bcb7bf4-khml6. Is it possible to force the pod name? I tried the following but did not get what I expected.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: yoda
  name: yoda
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yoda
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      name: yoda
      labels:
        app: yoda
    spec:
      containers:
      - image: busybox
        name: anakin
        resources: {}
      - image: nginx
        name: obiwan
        resources: {}
status: {}
Regards,
Benoît
This may not be the answer you expect, but in Kubernetes pods should not be seen as pets, i.e. they should not receive a lot of individual attention but should be considered highly replaceable. The name generation is part of this consideration, among others, to avoid conflicts.
Almost everything in Kubernetes involves some kind of decoupling, including container rollouts. If a pod always received the same name, it would cut itself off from things like rolling deployment strategies, in which one pod terminates while another spawns; otherwise a name conflict would be the result.
Without a deeper discussion of why the pod should be maintained by hand, I am not sure you will find a proper solution.
To give some perspective:
Labels (which you already use) are a good way to select a certain pod. If you update the deployment with a different image, there may briefly be two pods selectable by your yoda label.
So, if you want to select either the older or the newer pod (but not both), adding another label with the respective version could solve the distinguishing problem (if that is what you want). See the template metadata section below.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: yoda
  name: yoda
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yoda
  strategy: {}
  template:
    metadata:
      name: yoda
      labels:
        app: yoda
        app.version: 2.0.0
    spec:
      containers:
      - image: busybox
        name: anakin
        resources: {}
      - image: nginx
        name: obiwan
        resources: {}
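With that extra label in place, a query like this (a sketch) would match only the pods carrying the new version label:

kubectl get pods -l app=yoda,app.version=2.0.0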
I hope this helps.
I am not sure whether a StatefulSet can solve your issue, but a StatefulSet always retains the pod name. However, it also appends an ordinal number (starting from 0) to the pod name, going up to the number of replicas you define in the YAML definition file.
For example, if you define the replica count as 3 in the StatefulSet definition YAML file, the pods will be named as listed below.
[podName]-0
[podName]-1
[podName]-2
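For illustration, a minimal StatefulSet sketch (it assumes a headless Service named yoda exists and reuses one of the containers from the question) that would produce pods named yoda-0, yoda-1 and yoda-2:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: yoda
spec:
  serviceName: yoda        # headless Service assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: yoda
  template:
    metadata:
      labels:
        app: yoda
    spec:
      containers:
      - image: nginx
        name: obiwan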

Kubernetes non-specific spec.selector does not prevent Kubernetes from working correctly

I've experienced a surprising behavior when playing around with Kubernetes and I wanted to know if there is any good explanation behind it.
I've noticed that when two Kubernetes deployments are created with the same labels, and with the same spec.selector, the deployments still function correctly, even though using the same selector "should" cause them to become confused about which pods belong to each one.
Example configurations which present this -
example_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    extra_label: one
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
example_deployment_2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx
    extra_label: two
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I expected the deployments not to work correctly, since each would select pods from the other and assume they were its own.
The actual result is that the deployments seem to be created correctly, but viewing either deployment from k9s returns all of the pods. This is true for both deployments.
Can anyone please shed light on why this is happening? Is there additional internal filtering in Kubernetes to prevent pods which were not really created by the deployment from being associated with it?
I'll note that I've seen this behavior in AWS and have reproduced it in Minikube.
When you create a K8s Deployment, K8s creates a ReplicaSet to manage the pods; this ReplicaSet then creates the pods based on the number of replicas provided (or patched by the HPA). In addition to the labels and annotations you provide, the ReplicaSet adds an ownerReferences entry containing its name and uid, so even if you have 4 pods with the same labels, each pair of pods will carry a different ownerReference used by its own ReplicaSet to manage them:
apiVersion: v1
kind: Pod
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: <replicaset name>
    uid: <replicaset uid>
...
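As a quick way to see this (a sketch; the column names are arbitrary), you can print each pod together with the ReplicaSet that owns it:

kubectl get pods -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name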

What's the difference between the different values for the strategy tag in a k8s yaml file

I tested with two YAML files that differ only in the strategy tag.
The first one:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    test.k8s: test
  name: test
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        test.k8s: test
    spec:
      containers:
      - name: test
        image: alpine3.6
        imagePullPolicy: IfNotPresent
...
The second:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    test.k8s: test
  name: test
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        test.k8s: test
    spec:
      containers:
      - name: test
        image: alpine3.6
        imagePullPolicy: IfNotPresent
...
Then I updated the deployment with the kubectl patch and kubectl replace commands.
It seems only the new pod's start time is different.
And under both conditions the old pod is terminated at the end when the new pod fails to start because of a missing image.
Does anyone know about this?
Many thanks~
Basically, the .spec.strategy tag specifies how the cluster replaces old Pods with new ones.
In your case, the .spec.strategy.type==Recreate tag tells the cluster to terminate (kill) all existing Pods before new ones are created.
As for the second example, the .spec.strategy.type==RollingUpdate tag describes an approach to updating a service without a temporary outage, as it updates a limited number of Pods at a time to avoid service unavailability.
From your example, there are two parameters which define the RollingUpdate strategy:
.spec.strategy.rollingUpdate.maxUnavailable - indicates the maximum number of Pods that can be unavailable during the update process.
.spec.strategy.rollingUpdate.maxSurge - specifies the maximum number of Pods that can be created over the desired number of Pods.
There are several additional parameters you can consider using with RollingUpdate; for more information, refer to the documentation.
Note that by using the kubectl replace command you recreate the object (and its strategy) rather than updating it.
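To see the strategy exercised by a normal update instead (a sketch; alpine:3.7 is just a placeholder tag), you could change the image and watch the rollout:

kubectl set image deployment/test test=alpine:3.7   # update the image of container "test"
kubectl rollout status deployment/test              # watch old and new Pods being swapped
kubectl rollout history deployment/test             # revision history of the Deployment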

Is it possible to move the running pods from ReplicationController to a Deployment?

We are using an RC to run our workload and want to migrate to a Deployment. Is there a way to do that without causing any impact to the running workload? I mean, can we move these running pods under a Deployment?
Like @matthew-l-daniel answered, the answer is yes. But I am more than 80% certain about it, because I have tested it.
Now, what is the process we need to follow?
Let's say I have a ReplicationController.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Question: can we move these running pods under a Deployment?
Let's follow these steps to see if we can.
Step 1:
Delete this RC with --cascade=false. This will leave the Pods behind.
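For example, with the nginx RC above:

kubectl delete rc nginx --cascade=false    # orphans the running Pods instead of deleting them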
Step 2:
Create a ReplicaSet first, with the same labels as the ReplicationController:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ---
So now these Pods are under the ReplicaSet.
Step 3:
Now create a Deployment with the same labels:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ----
And the Deployment will find that a ReplicaSet already exists, and our job is done.
Now we can check by increasing the replicas to see if it works.
And it works.
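For instance (a sketch; the replica count is arbitrary), scaling and checking could look like:

kubectl scale deployment nginx --replicas=5
kubectl get pods -l app=nginx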
Which way it doesn't work:
After deleting the ReplicationController, do not create the Deployment directly. This will not work, because the Deployment will find no existing ReplicaSet and will create a new one with an additional label (pod-template-hash) that will not match your existing Pods.
I'm about 80% certain the answer is yes, since they both use Pod selectors to determine whether new instances should be created. The key trick is to use the --cascade=false flag (the default is true) in kubectl delete, whose help even speaks to your very question:
--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
By deleting the ReplicationController but not its subordinate Pods, they will continue to just hang out (although be careful, if a reboot or other hazard kills one or all of them, no one is there to rescue them). Creating the Deployment with the same selector criteria and a replicas count equal to the number of currently running Pods should cause a "no action" situation.
I regret that I don't have my cluster in front of me to test it, but I would think a small nginx RC with replicas=3 should be a simple enough test to prove that it behaves as you wish.

Re-scheduling pods from one node to another

So, I am writing a custom auto-rescheduler for my clusters and I am using the Python client library to do so. As the rescheduler is still a proposal and nothing has been done for it, the only known way is to delete the pod from the overused node and let the replication controller and scheduler take care of the rest (make a new pod and assign it to an appropriate node). What I want to know is whether I can use the client library to move the pods from one node to another without deleting the pod. Basically, I want to create a pod on an appropriate node first and then delete the pod on the overused node. Is that possible?
Using node labels you can start the containers on matching nodes. For this, you first need to set the node labels, then update the deployment file and apply it.
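For example (a sketch; the node names are placeholders), labeling two nodes could look like:

kubectl label nodes node-1 svrtype=web       # node-1 is a placeholder node name
kubectl label nodes node-2 svrtype=newweb    # node-2 is a placeholder node name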
Here are the sample YAML files I used for a blue-green deployment; I hope they help.
Web server running on nodes labeled web:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver-blue
spec:
  replicas: 2
  template:
    metadata:
      labels:
        type: webserver
        color: blue
    spec:
      containers:
      - image: nginx:1.12.0
        name: webserver-container
        ports:
        - containerPort: 80
          name: http-server
      nodeSelector:
        svrtype: web
Then set another node label, newweb, and update the config with a different Deployment name and node label:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver-green
spec:
  replicas: 2
  template:
    metadata:
      labels:
        type: webserver
        color: green
    spec:
      containers:
      - image: nginx:1.13.0
        name: webserver-container
        ports:
        - containerPort: 80
          name: http-server
      nodeSelector:
        svrtype: newweb
After testing you can remove the old one. The issue here is that you can direct traffic to only one deployment at a time.
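Directing that traffic is typically done with a Service whose selector matches only one color. A minimal sketch (the Service name and ports are assumptions); switching the selector from color: blue to color: green flips traffic to the new deployment:

apiVersion: v1
kind: Service
metadata:
  name: webserver          # hypothetical Service name
spec:
  selector:
    type: webserver
    color: blue            # change to green to cut traffic over
  ports:
  - port: 80
    targetPort: 80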