How to delete an existing label with helm upgrade - kubernetes-helm

I have an existing deployment with the label importance: normal in spec/template/metadata/labels (all the pods spawned from this deployment carry that label).
I want to be able to remove that label when a helm upgrade is performed.
I tried to remove it with the --set importance-{} flag, but I get an error.
Command I tried:
helm upgrade --install echo service-standard/service-standard --namespace qa --set importance-{} -f ./helm-chart/values.shared.yaml --wait --timeout 600s
Error it returns:
Error: failed parsing --set data: key "importance-{}" has no value
Here is a snippet of the deployment I am trying to remove the label from. The label is in the first spec block (not the second), next to app: echo-selector:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "8"
  creationTimestamp: "2022-12-14T15:24:04Z"
  generation: 9
  labels:
    app.kubernetes.io/managed-by: Helm
  name: echo-deployment
spec:
  replicas: 2
  revisionHistoryLimit: 5
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      creationTimestamp: null
      labels:
        app: echo-selector
        importance: normal
        version: current
    spec:
      containers:
      - env:
        - name: TEST
Any help or advice is greatly appreciated!
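For reference, Helm treats a value explicitly set to null as an instruction to drop that key when values are merged. Below is only a minimal sketch of the upgrade command, assuming the chart renders the pod label from a hypothetical podLabels.importance values key; the real key depends on how values.shared.yaml and the service-standard chart templates are wired, which is not shown here.

# Hypothetical sketch: podLabels.importance is assumed to be the values key behind
# the importance label; setting it to null asks Helm to remove the key on merge.
helm upgrade --install echo service-standard/service-standard \
  --namespace qa \
  --set podLabels.importance=null \
  -f ./helm-chart/values.shared.yaml --wait --timeout 600s

Whether the label actually disappears from the rendered manifest depends on the chart: if the label is hard-coded in the template rather than driven by a values key, the template itself has to change.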

Related

kubernetes pod deployment not updating

I have a pod egress-operator-controller-manager created from a Makefile with the command make deploy IMG=my_azure_repo/egress-operator:v0.1.
The pod's description was showing an unexpected status: 401 Unauthorized error, so I created an imagePullSecrets secret and am trying to update the pod with it by creating the deployment's YAML file [egress-operator-manager.yaml]. But when I apply this YAML file, it gives the error below:
root@Ubuntu18-VM:~/egress-operator# kubectl apply -f /home/user/egress-operator-manager.yaml
The Deployment "egress-operator-controller-manager" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"moduleId":"egress-operator"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
egress-operator-manager.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-operator-controller-manager
  namespace: egress-operator-system
  labels:
    moduleId: egress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      moduleId: egress-operator
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        moduleId: egress-operator
    spec:
      containers:
      - image: my_azure_repo/egress-operator:v0.1
        name: egress-operator
      imagePullSecrets:
      - name: mysecret
Can someone let me know how I can update this pod's deployment.yaml?
Delete the deployment once and try applying the YAML again.
This happens because Kubernetes does not allow a rolling update that changes the label selector: once a Deployment is created, its selector cannot be updated unless you delete the existing Deployment.
Changing selectors leads to undefined behaviors - users are not expected to change the selectors
https://github.com/kubernetes/kubernetes/issues/50808
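A minimal sketch of the delete-and-recreate step described above, using the names from the manifest in the question:

# The selector is immutable, so remove the existing Deployment first...
kubectl delete deployment egress-operator-controller-manager -n egress-operator-system
# ...then recreate it from the updated manifest (now including imagePullSecrets).
kubectl apply -f /home/user/egress-operator-manager.yaml

Note that deleting the Deployment also deletes its pods, so expect a brief outage while the new Deployment comes up.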

nodejs & kubernetes : How to do a deployment rollout restart (nodejs)

I'd like to do a rollout restart using nodejs, but I can't find the relevant API:
kubectl rollout restart deployment/myApp
Could you please help?
P.S.: I'd like to avoid scaling to 0 and then back to N.
This is usually nothing you should have to do. But what you can do is add or update an annotation in the template: part of the Deployment; this will trigger a new rollout.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dummy: hello # this is a new annotation
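For completeness, kubectl rollout restart itself works by stamping an annotation of this kind onto the pod template. Below is a rough sketch of the equivalent patch, which a Node.js client such as @kubernetes/client-node could send as the same HTTP PATCH; the deployment name my-app is taken from the example above, and date -Iseconds assumes a GNU/Linux shell.

# Updating spec.template.metadata.annotations forces a new ReplicaSet rollout;
# kubectl rollout restart sets kubectl.kubernetes.io/restartedAt in the same way.
kubectl patch deployment my-app --type merge \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date -Iseconds)\"}}}}}"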

Update deployment labels using “kubectl patch” does not work in v1.18

I am trying to update a label using kubectl v1.18.
I tried kubectl patch deployment my-deployment --patch "$(cat patch1.yaml)"; it returns an error:
The Deployment "my-deployment" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    client: user
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: revproxy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: revproxy
    spec:
      containers:
      - image: nginx:1.7.9
        name: nginx
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
The patch YAML is:
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
From the docs:
Note: In API version apps/v1, a Deployment's label selector is immutable after it gets created.
The motivations for making the label selector immutable are:
Changing selectors leads to undefined behaviors - users are not expected to change the selectors
Having selectors immutable ensures they always match created children, preventing events such as accidental bulk orphaning
If you want to modify the label selector, you will have to delete the existing Deployment and recreate it.
Modifying only metadata.labels should work, though.
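As a sketch of that last point, a patch that touches only the Deployment's own metadata.labels (and leaves spec.selector alone) is accepted; the key client comes from the manifest above, and the value admin is illustrative:

# Only metadata.labels changes here, so the immutable spec.selector is untouched.
kubectl patch deployment my-deployment --type merge \
  -p '{"metadata":{"labels":{"client":"admin"}}}'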

configmap change doesn't reflect automatically on respective pods

apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice1
  replicas: 3 # tells deployment to run 3 pods matching the template
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice1
    spec:
      containers:
      - name: consoleservice
        image: chintamani/insightvu:ms-console1
        readinessProbe:
          httpGet:
            path: /
            port: 8385
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /deploy/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-config
To create the configmap I am using this command:
kubectl create configmap console-config --from-file=deploy/config
When I change the configmap, the change is not reflected automatically; every time I have to restart the pod. How can I make this happen automatically?
Thank you guys. I was able to fix it: I am using Reloader to reflect any changes to the configmap on the pods.
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
Then add the annotation inside your deployment.yml file:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
  annotations:
    configmap.reloader.stakater.com/reload: "console-config"
It will restart your pods gradually.
Pods and configmaps are completely separate objects in Kubernetes, and pods do not automatically restart themselves when a configmap changes.
There are a few alternatives to achieve this.
Use Wave, a Kubernetes controller that looks for a specific annotation and updates the deployment whenever the configmap changes: https://github.com/pusher/wave
Use https://github.com/stakater/Reloader; Reloader can watch configmap changes and update the pods to pick up the new configuration.
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    metadata:
You can add a custom configHash annotation to the deployment and, in CI/CD or while deploying the application, use yq to replace that value with the hash of the configmap. Whenever the configmap changes, Kubernetes detects the changed annotation on the deployment and rolls the pods with the new configuration.
yq w --inplace deployment.yaml spec.template.metadata.annotations.configHash "$(kubectl get cm/configmap -oyaml | sha256sum | cut -d' ' -f1)"
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: application
spec:
  selector:
    matchLabels:
      app: consoleservice1
  replicas: 3
  template:
    metadata:
      labels:
        app: consoleservice1
      annotations:
        configHash: ""
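A rough end-to-end sketch of that workflow, assuming the console-config configmap from the question, yq v3 write syntax as in the command above, and a kubectl new enough to support --dry-run=client:

# 1. Update the configmap from the files on disk.
kubectl create configmap console-config --from-file=deploy/config --dry-run=client -o yaml | kubectl apply -f -
# 2. Write the configmap's hash into the deployment's pod-template annotation
#    (cut strips the trailing "-" that sha256sum prints for stdin input).
yq w --inplace deployment.yaml spec.template.metadata.annotations.configHash "$(kubectl get cm console-config -o yaml | sha256sum | cut -d' ' -f1)"
# 3. Apply the deployment; the changed annotation triggers a rolling restart.
kubectl apply -f deployment.yaml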

apiversion changed after kubectl create

In Kubernetes 1.8, when I create a deployment, for example:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then when I do:
kubectl get deploy nginx-deployment -o yaml
I get:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-01-24T01:01:01Z
  ....
Why is the apiVersion extensions/v1beta1 instead of apps/v1beta2?
When you create a deployment, the apiserver persists it and is capable of converting the persisted deployment into any supported version.
kubectl get deployments actually requests the extensions/v1beta1 version (you can see this by adding --v=6)
To get apps/v1beta2 deployments, do kubectl get deployments.v1beta2.apps
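For example, using the deployment from the question on a 1.8 cluster that serves apps/v1beta2, the fully-qualified resource name returns it in that version:

# Requests the apps/v1beta2 representation instead of the default extensions/v1beta1 one.
kubectl get deployments.v1beta2.apps nginx-deployment -o yaml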
You might be using an old version of kubectl.
If so, upgrade kubectl to 1.8 and then create the deployment again.