I'm trying to remove a key/value pair from an existing deployment's spec.selector.matchLabels config. For example, I'm trying to remove the some.old.label: blah label from spec.selector.matchLabels and spec.template.metadata.labels. So this is an excerpt of what I'm sending to kubectl apply -f:
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
but that gives me the following error:
selector does not match template labels
I also tried kubectl replace, which gives me this error:
v1.LabelSelector{MatchLabels:map[string]string{"app": "my-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
which makes sense once I checked the deployment's config in prod:
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      # my config is trying to mutate the matchLabels here:
      {"apiVersion":"apps/v1", ... "selector":{"matchLabels":{"app":"my-app"} ... }
      # etc...
spec:
  selector:
    matchLabels:
      app: my-app
      some.old.label: blah # how do I remove this label from both the metadata.labels and matchLabels?
  template:
    metadata:
      labels:
        app: my-app
        some.old.label: blah # I want to remove this label
Notice how the some.old.label: blah key/value is set under selector.matchLabels and template.metadata.labels.
Will I have to delete-then-recreate my deployment? Or perhaps call kubectl replace --force?
Notes
I came across this section in the Kubernetes Deployment docs:
Selector removals removes an existing key from the Deployment selector -- do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.
as well as this PR and this GitHub issue, which discuss the reasoning behind the problem, but I can't figure out how I can safely update my deployment to remove this selector.
When the error message says "field is immutable", it means you can't change it once it's been set. You need to delete and recreate the Deployment with the label selector you want (which will also temporarily delete all of the matching Pods).
kubectl delete deployment my-app
kubectl apply -f ./deployment.yaml
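Equivalently, kubectl replace --force does the delete-and-recreate in a single step (it still removes the existing Pods while the new ones come up); a sketch, assuming deployment.yaml already contains the trimmed-down selector:
kubectl replace --force -f ./deployment.yaml   # deletes the existing Deployment, then creates it again from the file
If the brief outage matters, kubectl delete deployment my-app --cascade=orphan (or --cascade=false on older kubectl) leaves the old ReplicaSet and Pods running while you re-apply, but you then have to clean up the orphaned objects yourself.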
Related
In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels? If they didn't match, any pod created wouldn't match the selector, and I'd imagine K8s would go on creating new pods until every node is full. If that's true, why does K8s want us to specify the same labels twice? Either I'm missing something or this violates the DRY principle.
The only thing I can think of would be creating a Deployment whose matchLabels contain both "key: A" and "key: B", so that it simultaneously adopts existing/un-owned pods that have label "key: A" while any new pods get label "key: B". But even then, it feels like any label in the template metadata should automatically be in the selector matchLabels.
K8s docs give the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
...In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels?
One example is a canary deployment.
...If they didn't match, any pod created wouldn't match the selector, and I'd imagine K8s would go on creating new pods until every node is full.
Your deployment will not proceed; it will fail with the error "selector does not match template labels", and no Pod will be created.
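For instance, a manifest like the following (names are illustrative) is rejected at validation time with exactly that error, before any ReplicaSet or Pod is created:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mismatch-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx   # the selector requires app=nginx...
  template:
    metadata:
      labels:
        app: web   # ...but the Pod template only carries app=web, so validation fails
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2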
...it feels like any label in the template metadata should automatically be in the selector matchLabels.
Labels under template.metadata serve many purposes, not only the Deployment's; for example, labels added by a CNI can relate to IP assignment on the fly. Labels meant for the selector should be minimal and specific.
...In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels?
Labels under spec.selector.matchLabels must match labels that are present under spec.template.metadata.labels, but spec.template.metadata.labels may contain additional labels that are not present under spec.selector.matchLabels.
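As a minimal sketch of that subset relationship (the track label is purely illustrative), the selector below only requires app=nginx, while the Pod template carries an extra label the ReplicaSet does not select on:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx        # the only label the Deployment/ReplicaSet selects on
  template:
    metadata:
      labels:
        app: nginx      # must include everything in matchLabels...
        track: canary   # ...extra labels like this one are allowed (handy for canaries)
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2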
I keep getting the error below, inconsistently, on one of my services' Endpoints objects: "Failed to update endpoint default/myservice: Operation cannot be fulfilled on endpoints "myservice": the object has been modified; please apply your changes to the latest version and try again". I am sure I am not editing the Endpoints object manually, because all my Kubernetes objects are deployed through Helm 3 charts, yet it keeps giving the same error. It goes away if I delete and recreate the service. Please help or give any leads as to what could be the issue.
Below is my service.yml object from the cluster:
kind: Service
apiVersion: v1
metadata:
  name: myservice
  namespace: default
  selfLink: /api/v1/namespaces/default/services/myservice
  uid: 4af68af5-4082-4ffb-b11b-641d16b28f31
  resourceVersion: '1315842'
  creationTimestamp: '2020-08-13T11:00:53Z'
  labels:
    app: myservice
    app.kubernetes.io/managed-by: Helm
    chart: myservice-1.0.0
    heritage: Helm
    release: vanilla
  annotations:
    meta.helm.sh/release-name: vanilla
    meta.helm.sh/release-namespace: default
spec:
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000
  selector:
    app: myservice
  clusterIP: 10.0.225.85
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
Inside the Kubernetes system is a control loop which evaluates the selector of every Service and saves the results into a corresponding Endpoints object. So a good place to start debugging whether your Service side is fine is to look at the Pods being selected by the Service; the selector labels should be labels that are actually defined on the Pods.
kubectl get pods -l app=myservice
If you get results, look at the RESTARTS column; if the Pods are restarting, there could be intermittent connectivity issues.
If you are not getting results, it could be due to wrong selector labels. Verify the labels on your Pods by running:
kubectl get pods -A --show-labels
A good point of reference is https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
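You can also look at the Endpoints object that this control loop maintains for the Service; if the selector matches no Pods, the ENDPOINTS column stays empty (myservice is the Service name from the question):
kubectl get endpoints myservice
kubectl describe service myservice   # shows the selector and the endpoint addresses it currently resolves to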
It's common behavior and might happen when you try to deploy resources by copy-pasting manifests that include system metadata fields like creationTimestamp, resourceVersion, selfLink, etc.
Those fields are populated by the system when the object is persisted. The error appears when you attempt to update a resource whose version has already changed on the server, so the API server refuses the stale update. The solution is to check your YAMLs and apply only the fields you own, without the fields populated by the system.
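As a sketch, the Service from the question reduced to only the fields you own would look roughly like this (no uid, selfLink, resourceVersion, creationTimestamp, clusterIP or status):
kind: Service
apiVersion: v1
metadata:
  name: myservice
  namespace: default
  labels:
    app: myservice
spec:
  type: ClusterIP
  selector:
    app: myservice
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000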
Below is my Kubernetes deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: boxfusenew
  labels:
    app: boxfusenew
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: boxfusenew
    spec:
      containers:
        - image: sk1997/boxfuse:latest
          name: boxfusenew
          ports:
            - containerPort: 8080
In this deployment file, the name boxfusenew is specified under the container tag. I want the Pod generated by the deployment to have the name boxfusenew, but the deployment is appending some random value to it, as in boxfusenew-5f6f67fc5-kmb7z.
Is it possible to avoid these random values in the Pod name through the deployment file?
Not really, unless you create the Pod itself and not a deployment.
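For illustration only, a bare Pod manifest (reusing the image from the question) keeps exactly the name you give it, at the cost of losing the Deployment's restart and rollout management:
apiVersion: v1
kind: Pod
metadata:
  name: boxfusenew   # the Pod is created with exactly this name, no random suffix
  labels:
    app: boxfusenew
spec:
  containers:
  - name: boxfusenew
    image: sk1997/boxfuse:latest
    ports:
    - containerPort: 8080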
According to Kubernetes documentation:
Each object in your cluster has a Name that is unique for that type of resource. Every Kubernetes object also has a UID that is unique across your whole cluster.
For example, you can only have one Pod named myapp-1234 within the same namespace, but you can have one Pod and one Deployment that are each named myapp-1234.
For non-unique user-provided attributes, Kubernetes provides labels and annotations.
If you create a Pod with a specific unique label, you can use this label to query the Pod, so no need of having the exact name.
You can use a jsonpath to query the values that you want from your Pod under that specific deployment. I've created an example that may give you an idea:
kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.app=="boxfusenew")].metadata.name}'
This would return the name of any Pod that carries the label app=boxfusenew. You can take a look at some other examples of jsonpath here and here.
First, what kind of use case do you want to achieve? If you simply want to get the Pods that belong to a certain Deployment, you can use a label selector. For example:
kubectl -n <namespace> get po -l <key>=<value>
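For the deployment in the question that could look like this (assuming the default namespace):
kubectl -n default get po -l app=boxfusenew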
I've gone over the following documentation page: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
The example deployment yaml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
We can see here three different times where the label app: nginx is mentioned.
Why do we need each of them? I had a hard time understanding it from the official documentation.
The first label is for the Deployment itself; it labels that particular Deployment object. Let's say you want to delete that Deployment; then you run the following command:
kubectl delete deployment -l app=nginx
This will delete the entire deployment.
The second label is the selector: matchLabels, which tells other resources (the ReplicaSet, a Service, etc.) which Pods to match by label. So let's say you want to create a Service that targets all Pods having the label app=nginx; then you provide the following definition:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
The above Service will look at its selector and bind to the Pods which have the label app: nginx assigned to them.
The third label is the Pod template's labels; the template is in fact a podTemplate that describes the Pods to be launched. So if you have a two-replica Deployment, k8s will launch 2 Pods with the labels specified under template: metadata: labels. This is a subtle but important difference: you can have different labels on the Deployment and on the Pods generated by that Deployment, as in the sketch below.
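A small sketch of that difference (the team label is purely illustrative): the Deployment object carries a label its Pods never get, while the Pods get the tier label from the template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    team: payments      # label on the Deployment object only
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend  # label stamped onto every Pod the ReplicaSet creates
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2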
First label:
It is the Deployment's label, which is used to select the Deployment. You can use the command below with the first label:
kubectl get deployment -l app=nginx
Second Label:
It is not a label. It is a label selector used to select Pods with the label app=nginx. It is used by the ReplicaSet.
Third Label:
It is the Pod label, used to identify the Pods. The ReplicaSet uses it, via the label selector, to maintain the desired number of replicas.
It can also be used to select Pods with the command below:
kubectl get pods -l app=nginx
As we know, labels are used to identify resources:
The first label identifies the Deployment itself.
The third one falls under the Pod template section, so it is specific to the Pods.
The second one, i.e. matchLabels, is used to tell Services, the ReplicaSet, and other resources to act on resources that satisfy the specified label conditions.
While the first and third are label assignments to the Deployment and the Pods respectively, the second is a matching condition expression rather than an assignment.
Though all three carry the same labels in most real-world examples, the first can differ from the second and third. The second and third, however, are usually identical, since the second is the conditional expression that acts upon the third.
.metadata.labels is for labeling the Deployment object itself. You don't necessarily need it, but, as other answers said, it helps you organize objects.
.spec.selector tells the Deployment (under the hood, the ReplicaSet object) how to find the Pods to manage. In your example, it will manage Pods with the label app: nginx.
But how do you tell the ReplicaSet controller to create pods with that label in the first place? You define that in the pod template, .spec.template.metadata.labels.
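You can see that relationship on a running cluster: the ReplicaSet created for the nginx example above carries the same app: nginx label, so (assuming that example has been applied) both commands below list the objects the Deployment manages:
kubectl get rs -l app=nginx    # the ReplicaSet(s) behind the Deployment
kubectl get pods -l app=nginx  # the Pods those ReplicaSets created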
I have a docker-compose.yml file we have been using to set up our development environment.
The file declares some services, all of them more or less following the same pattern:
services:
  service_1:
    image: some_image_1
    environment:
      - ENV_VAR_1
      - ENV_VAR_2
    depends_on:
      - another_service_of_the_same_compose_file
With a view to migrating to Kubernetes, running:
kompose convert -f docker-compose.yml
produces, for each service, a pair of Deployment/Service manifests.
Two questions about the deployment generated:
1.
the examples in the official documentation seem to hint that the selector field is needed for a Deployment to be aware of the pods to manage.
However the deployment manifests created do not include a selector field, and are as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.6.0 (e4adfef)
  creationTimestamp: null
  labels:
    io.kompose.service: service_1
  name: service_1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: service_1
    spec:
      containers:
        - image: my_image
          name: my_image_name
          resources: {}
      restartPolicy: Always
status: {}
2.
the apiVersion in the generated deployment manifest is extensions/v1beta1, however the examples in the Deployments section of the official documentation default to apps/v1.
The recommendation seems to be
for versions before 1.9.0 use apps/v1beta2
Which is the correct version to use? (using kubernetes 1.8)
Let's begin by saying that Kubernetes and Kompose are two different, independent systems. Kompose tries to map everything in the Compose file onto Kubernetes as closely as it can.
At the moment, all of the selector fields are generated by Kubernetes. In the future, it might be done by us.
If you would like to check your selector fields, use the following commands:
kubectl get deploy
kubectl describe deploy DEPLOY_NAME
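To print just the selector, a jsonpath query also works (DEPLOY_NAME is a placeholder):
kubectl get deploy DEPLOY_NAME -o jsonpath='{.spec.selector}'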
As of k8s 1.9, all of the long-running objects are part of the apps group.
We’re excited to announce General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.
I have attached the link below for further research:
kubernetes-19-workloads
As a selector field isn't required for Deployments under extensions/v1beta1, Kompose doesn't set one (the selector basically tells the Deployment which Pods it manages, not which nodes they run on).
I wouldn't edit the apiVersion, because Kompose assumes that version when generating the rest of the resource. Also, if you are using Kubernetes 1.8, read the 1.8 docs: https://v1-8.docs.kubernetes.io/docs/
In Kubernetes 1.16 the extensions/v1beta1 Deployment API was removed, and under apps/v1 the Deployment's spec.selector is required. Kompose (as of 1.20) does not yet add it automatically. You will have to add this to every *-deployment.yaml file it creates:
selector:
  matchLabels:
    io.kompose.service: alignment-processor
If you use an IDE like the JetBrains ones, you can use the following search/replace patterns on the folder where you put the conversion results:
Search for this multiline regexp:
    io.kompose.service: (.*)
  name: \1
spec:
  replicas: 1
  template:
Replace with this pattern:
    io.kompose.service: $1
  name: $1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: $1
  template:
The (.*) captures the name of the service, the \1 matches the (first and only) capture, and the $1 substitutes the capture in the replacement.
You will also have to substitute all extensions/v1beta1 with apps/v1 in all *-deployment.yaml files.
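If you prefer the command line to an IDE, a rough equivalent using GNU sed over the conversion output (back up the files first; assumes they sit in the current directory):
sed -i 's|extensions/v1beta1|apps/v1|g' *-deployment.yaml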
I also found that secrets have to be massaged a bit, but that goes beyond the scope of this question.