Error: selector does not match template labels - kubernetes

My task is to add a label named "app" to all deployments, daemonsets, and cronjobs so that it's easier to query our apps across the stack in our monitoring tools. This way, we can build dashboards that use a single selector, namely app.
To avoid downtime I've decided to resolve this issue in the following steps:
1. Add labels to dev, test & stage environments.
2. Add labels to prod environments.
3. Deploy (1).
4. Deploy (2).
5. Delete the old labels & update the services of dev to use the new labels. Then test & deploy. (currently on this step)
6. Repeat (5) for stage.
7. Repeat (5) for prod.
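For step (5), the Service change is just pointing its selector at the new label. A sketch, assuming the old label key was "service" and using the "provisioning" name from the error message below (the ports shown are purely illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: provisioning
spec:
  selector:
    app: provisioning   # was: service: provisioning
  ports:
  - port: 80
    targetPort: 8080
```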
When using $ kubectl apply to update the resources where I've added the "app" label (or replaced the "service" label with "app"), I run into the following error:
Error from server (Invalid): error when applying patch:
{longAssPatchWhichIWon'tIncludeButYaGetThePoint} to: &{0xc421b02f00 0xc420803650 default provisioning manifests/prod/provisioning-deployment.yaml 0xc42000c6f8 3942200 false} for: "manifests/prod/provisioning-deployment.yaml":
Deployment.apps "provisioning" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"provisioning", "component":"marketplace"}: selector does not match template labels
I need some insights on why it's throwing this error.

It seems you are in trouble. Check this section: Label selector updates
Note: In API version apps/v1, a Deployment’s label selector is immutable after it gets created.
So this line says you cannot update the selector once the Deployment is created. The selector cannot be changed in any API version except apps/v1beta1 and extensions/v1beta1. Ref: TestDeploymentSelectorImmutability.
One possible workaround is to keep the old labels and add the new labels alongside them. That way you don't have to update the selector: the Deployment keeps selecting pods by the old labels, while your dashboards can select by the new ones. This might not meet your requirement, but I don't see a better way.
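A minimal sketch of that workaround, using the names from the error message above (the old "service" label key is an assumption; substitute whatever your existing selector uses):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provisioning
spec:
  selector:
    matchLabels:
      service: provisioning     # old selector label, immutable, must stay
  template:
    metadata:
      labels:
        service: provisioning   # must keep matching the selector
        app: provisioning       # new label, free to add for dashboards
        component: marketplace
    # ... container spec unchanged ...
```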

This error is hard to read but it means that the labels specified in spec.template.metadata.labels of your Deployment definition do not match those of spec.selector.matchLabels within the same definition. Upload your YAML if you require further assistance.
Best!

There are a few ways to resolve this from what I can tell. One way is to delete the deployment and re-apply it with a selector key/value that matches the template labels:
spec:
  selector:
    matchLabels:
      app: app_name
  template:
    metadata:
      labels:
        app: app_name
        -- whatever else --
This obviously incurs downtime but should be permanent. Your other option is to edit the deployment selector:
kubectl -n namespace edit deployment app-deployment
Then run your apply command again. This may or may not be permanent, as I don't know what changed the selector to begin with.
If your pod doesn't even exist yet to do modifications, this error might be legitimate: you may have a pod with the same name in the same namespace.

Someone came to me with this issue and it turned out that they had typed "matadata" instead of "metadata", so as far as Kubernetes was concerned the label wasn't defined, which led to this error message.


Helm installation throws an error for cluster-wide objects

I have a problem with helm chart that I would like to use to deploy multiple instances of my app in many namespaces. Let's say it would be ns1, ns2 and ns3.
I run helm upgrade with the --install option and it goes well for ns1, but when I want to run it a second time for ns2 I get an error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "my-psp" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ns2": current value is "ns1"
I found many topics about this problem, but every time the only answer is to delete the old object and install it again with helm. I don't want to do that - I would like to get two or more instances of my app that use k8s objects common to many namespaces.
What can I do in this situation? I know I could change the names of those objects with every deployment, but that would be really messy. A second idea is to move those objects to another chart and deploy it just once, but sadly that's a ton of work, so I would like to avoid it. Is it possible to ignore this error somehow and still make this work?
I found the solution. The easiest way is to add a lookup block to your templates:
{{- if not (lookup "policy/v1beta1" "PodSecurityPolicy" "" "my-psp") }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
...
{{- end }}
With this config the object will be created only if an object with the same name does not already exist.
It may not be a perfect solution, but if you know what you're doing it can save you a lot of time.

How to run multiple IngressController with same IngressClass?

Is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass?
I have multiple IngressControllers with different LoadBalancer IP addresses and would like to keep this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm, which prevents you from overwriting resources - your IngressClass - that belong to another helm deployment.
One way to work around this may be to use helm's --dry-run option. Once you have the list of objects written to a file, remove the IngressClass, then apply that file.
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik helm chart, I know that we would install IngressClasses named after the Traefik deployment we operate. The chart you're using, for Nginx, apparently does not implement support for that scenario. Which doesn't mean it shouldn't work.
Now, answering your first question, is it possible to run multiple IngressController in the same Namespace with the same IngressClass: yes.
You may have several Ingress Controllers, one that watches for Ingresses in namespace A, another in namespace B, both ingresses referencing the same class. Deploying those ingresses into the same namespace is possible - although implementing NetworkPolicies, isolating your controllers into their own namespace would help in distinguishing who's who.
An option that works for me, when deploying multiple ingress controllers with Helm, is setting controller.ingressClassResource.enabled: false in every Helm deployment except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create an IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
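In practice that is just a tiny values override for every release after the first (controller.ingressClassResource.enabled is the chart value named above; everything else stays at its defaults):

```yaml
# values override for the second and later ingress-nginx releases:
controller:
  ingressClassResource:
    enabled: false   # reuse the existing "nginx" IngressClass instead of creating it
```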

How to make an environment variable different across two pods of the same deployment in kubernetes?

Based on this it is possible to create environment variables that are the same across all the pods of the deployment that you define.
Is there a way to instruct Kubernetes deployment to create pods that have different environment variables?
Use case:
Let's say that I have a monitoring container and i want to create 4 replicas of it. This container has a service that is mailing if an environment variables defines so. Eg, if the env var IS_MASTER is true, then the service proceeds to send those e-mails.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
  template:
    ...
    spec:
      containers:
      - env:
        - name: IS_MASTER
          value: <------------- True only in one of the replicas
(In my case I'm using helm, but the same thing can be done without helm as well)
What you are looking for is, as far as I know, not so much impossible as an anti-pattern.
From what I understand, you seem to be looking to deploy a scalable/HA monitoring platform that wouldn't mail X times on alerts, so you can either make a sidecar container that will talk to its siblings and "elect" the master-mailer (a StatefulSet will make it easier in this case), or just separate the mailer from the monitoring and make them talk to each other through a Service. That would allow you to load-balance both monitoring and mailing separately.
monitoring-1 \ / mailer-1
monitoring-2 --- > mailer.svc -- mailer-2
monitoring-3 / \ mailer-3
Any mailing request will be handled by one and only one mailer from the pool, but that's assuming your Monitoring Pods aren't all triggered together on alerts... If that's not the case, then regardless of your "master" election for the mailer, you will have to tackle that first.
And by tackling that first I mean adding a master-election logic to your monitoring platform, to orchestrate master fail-overs on events, there are a few ways to do so, but it really depends on what your monitoring platform is and can do...
Although, if your replicas are just there to extend compute power somehow and your master is expected to be static, then simply use a StatefulSet and add a one-liner at runtime doing if hostname == $statefulset-name-0 then MASTER, but I feel like it's not the best idea.
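That one-liner could be sketched as a small entrypoint helper. A minimal sketch, assuming the StatefulSet is named web (so pods are web-0, web-1, web-2, ...):

```shell
# is_master.sh - derive the IS_MASTER flag from the pod's ordinal hostname.
is_master() {
  case "$1" in
    web-0) echo true ;;   # the first ordinal acts as master
    *)     echo false ;;  # every other replica is a worker
  esac
}

# In the container entrypoint you would call it with the real hostname:
# export IS_MASTER="$(is_master "$(hostname)")"
```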
By definition, each pod in a deployment is identical to its other replicas. This is not possible in the yaml definition.
One possible solution would be to override the pod command and have it compute the value of the variable, set it (export IS_MASTER=${resolved_value}), and then trigger the default entrypoint for the container.
It means you'll have to figure out a logic to implement this (i.e. how does the pod know it should be IS_MASTER=true?). This is an implementation detail that can be done with a DB or other shared common resource used as a flag or semaphore.
All the Pod replicas in the deployment will have the same environment variables and no unique value to identify a particular Pod. Creating multiple Deployments is a better solution.
Not sure why; the OP asks about only one Deployment. One solution is to use StatefulSets. The pod names would be like web-0, web-1, web-2 and so on. In the code, check the host name: if it is web-0, send the emails; otherwise do something else.
It's a dirty solution, but I can't think of a better solution than creating multiple deployments.
One other solution is to use the same Helm chart for both cases and run one helm deployment for each case. You can overwrite env variables with helm (using --set foo.deployment.isFirst="0" or "1").
Please note that Helm/K8s will not allow you to POST the very same configuration twice.
So you will have to conditionally apply some Kubernetes-specific configuration (Secrets, ConfigMaps, etc.) on the first deployment only.
{{- if eq .Values.foo.deployment.isFirst "1" }}
...
...
{{- end }}

Can I modify container's environment variables without restarting pod using kubernetes

I have a running pod and I want to change one of its containers' environment variables and have the change take effect immediately. Can I achieve that? If I can, how?
Simply put, and in kube terms, you cannot.
The environment of a Linux process is established at process startup, and there are certainly no kube tools that can achieve such a goal.
For example, if you make a change to your Deployment (I assume you use one to create the pods), it will roll the underlying pods.
Now, that said, there is a really hacky solution reported under Is there a way to change the environment variables of another process in Unix? that involves using GDB.
Also, remember that even if you could do that, there is still application logic that would need to watch for such changes instead of, as it usually does now, just evaluating configuration from the environment during startup.
This worked for me:
kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N
Check the official documentation here. (Note that for a Deployment this updates the spec and therefore triggers a rollout of its pods.)
Another approach for running pods: you can get into the Pod's command line and change the variables at runtime:
kubectl exec -it <pod_name> -- /bin/bash
Then run:
export VAR1=VAL1 && export VAR2=VAL2 && your_cmd
(Note that variables exported this way exist only in that exec session's shell, not in the container's main process.)
I'm not aware of any way to do it, and I can't think of a real-world scenario where this makes much sense.
Usually you have to restart a process for it to notice changed environment variables, and the easiest way to do that is to restart the pod.
The solution closest to what you seem to want is to create a deployment and then use kubectl edit (kubectl edit deploy/name) to modify its environment variables. A new pod is started and the old one is terminated after you save.
Kubernetes is designed in such a way that any changes to the pod should be redeployed through the config. If you go messing with pods that have already been deployed you can end up with weird clusters that are hard to debug.
If you really want to you can run additional commands in your running pod using kubectl exec, but this is only recommended for debug purposes.
kubectl exec -it <pod_name> -- sh -c 'export VARIABLENAME=<thing>'
If you are using Helm 3 or later, then according to the documentation:
Automatically Roll Deployments
Often times ConfigMaps or Secrets are injected as configuration files in containers or there are other external dependencies changes that require rolling pods. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.
The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
In the event you always want to roll your deployment, you can use a similar annotation step as above, instead replacing with a random string so it always changes and causes the deployment to roll:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
[...]
Both of these methods allow your Deployment to leverage the built in update strategy logic to avoid taking downtime.
NOTE: In the past we recommended using the --recreate-pods flag as another option. This flag has been marked as deprecated in Helm 3 in favor of the more declarative method above.
It is hard to change from the outside, but it is easy from the inside: your app running in the pod can change it. Just expose an API to change the environment variable.
You can use a ConfigMap mounted as a volume to update your app's configuration on the go.
Refer: https://itnext.io/how-to-automatically-update-your-kubernetes-app-configuration-d750e0ca79ab

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
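A minimal sketch of that volume-mount approach (the names app-config, my-app, and the mount path are placeholders, not from the original question):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  settings.properties: |
    LOG_LEVEL=debug
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config
      mountPath: /etc/config   # the kubelet refreshes these files when the ConfigMap changes
  volumes:
  - name: config
    configMap:
      name: app-config
```

The application still has to re-read the files under /etc/config to pick up changes; the mount only guarantees the files themselves are updated.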
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the "deployment" can easily be edited, because the "pod" is a child template of the deployment specification.
To "edit" a running pod with the desired changes, the following approach can be used:
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. When I retried after some time, it worked.
It feels like some update was going on at the same time, so I tried to edit the YAML quickly and apply the changes, and it worked.