Use variable in a patchesJson6902 of a kustomization.yaml file - kubernetes

I would like to set the name field in a Namespace resource and also replace the namespace field in a Deployment resource with the same value, for example my-namespace.
Here is kustomization.yaml:
namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /metadata/name
      value: <NAMESPACE>
  target:
    kind: Namespace
    name: system
    version: v1
resources:
- manager.yaml
and manager.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
I tried using kustomize edit set namespace my-namespace && kustomize build, but it only changes the namespace field in the Deployment object.
Is there a way to change both fields without using sed, in 'pure' kustomize, and without having to manually change the value in kustomization.yaml?

I managed to achieve something similar with the following configuration:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
And here is the output of the command that you used:
➜ kustomize kustomize edit set namespace my-namespace7 && kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace7
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
What is happening here is that once you set the namespace globally in kustomization.yaml, it is applied to all of your targets, which looks to me like an easier way to achieve what you want.
I cannot test your config without the manager_patch.yaml content. If you wish to pursue your approach further, you will have to update the question with the file content.
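If you want to avoid editing kustomization.yaml by hand at all, you can script the same thing. A minimal sketch, assuming the overlay lives in the current directory and the target namespace arrives as an environment variable (NAMESPACE is just an example name here):
export NAMESPACE=my-namespace
# Rewrite the namespace field in kustomization.yaml, then render and apply the manifests.
kustomize edit set namespace "$NAMESPACE"
kustomize build . | kubectl apply -f -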

Related

Kubernetes: Is there a way to retrieve or inject local env vars into configmap.yaml? [duplicate]

I am setting up Kubernetes for a Django webapp.
I am passing an environment variable while creating the deployment, as below:
kubectl create -f deployment.yml -l key1=value1
I am getting the error below:
error: no objects passed to create
I am able to create the deployment successfully if I remove the env variable -l key1=value1 while creating the deployment.
deployment.yaml is as below:
#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: sigma-service
  name: $key1
What could be the reason for the above error while creating the deployment?
I used envsubst (https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) for this. Create a deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $NAME
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then:
export NAME=my-test-nginx
envsubst < deployment.yaml | kubectl apply -f -
Not sure what OS you are using to run this. On macOS, envsubst is installed like:
brew install gettext
brew link --force gettext
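One caveat: plain envsubst substitutes every $VAR it can find. If your manifest contains other dollar signs you want left alone, GNU envsubst accepts a list of variables to replace (a sketch, assuming a gettext version that supports the SHELL-FORMAT argument):
export NAME=my-test-nginx
# Only $NAME is substituted; any other $... strings in the file are left untouched.
envsubst '$NAME' < deployment.yaml | kubectl apply -f -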
This isn't the right way to use the deployment; you can't provide half the details in YAML and half in kubectl commands. If you want to pass environment variables to your deployment, you should add those details under the deployment's spec.template.spec.
You should add the following block to your deployment.yaml:
spec:
  containers:
  - env:
    - name: var1
      value: val1
This will export your environment variables inside the container.
The other way to export the environment variable is to use kubectl run (not advisable) as it is going to be deprecated very soon. You can use the following command:
kubectl run nginx --image=nginx --restart=Always --replicas=1 --env=var1=val1
The above command will create a deployment nginx with 1 replica and the environment variable var1=val1.
You cannot pass variables to "kubectl create -f". YAML files should be complete manifests without variables. Also, you cannot use the "-l" flag with "kubectl create -f".
If you want to pass environment variables to a pod, you should do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        env:
        - name: MY_VAT
          value: MY_VALUE
        ports:
        - containerPort: 80
Read more here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
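To verify the variable actually made it into the container, one quick check (just a sketch; kubectl exec against a deployment name needs a reasonably recent kubectl, otherwise pick a pod explicitly):
# Print the injected variable from a pod belonging to the deployment.
kubectl exec deployment/nginx-deployment -- printenv MY_VAT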
Follow the steps below.
Create test-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Using the sed command, you can update the deployment name at deployment time:
sed -e 's|MYAPP|my-nginx|g' test-deploy.yaml | kubectl apply -f -
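If you want to see exactly what sed will send to the cluster before anything is created, you can add a client-side dry run first (a small sketch, not part of the original steps; requires a kubectl recent enough to support --dry-run=client):
# Render the substituted manifest and validate it without creating anything.
sed -e 's|MYAPP|my-nginx|g' test-deploy.yaml | kubectl apply --dry-run=client -o yaml -f -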
File: ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
File: ./service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: nginx
File: ./kustomization.yaml
resources:
- deployment.yaml
- service.yaml
If you're using https://kustomize.io/, you can do this trick in CI:
sh '( echo "images:" ; echo " - name: $IMAGE" ; echo " newTag: $VERSION" ) >> ./kustomization.yaml'
sh "kubectl apply --kustomize ."
I chose yq since it is YAML-aware and gives precise control over where text substitutions happen.
To set an image from a bash env var:
export IMAGE="your_image:latest"
yq eval '.spec.template.spec.containers[0].image = "'$IMAGE'"' manifests/daemonset.yaml | kubectl apply -f -
yq is available on MacPorts (as of 19/04/21 v4.4.1) with sudo port install yq
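yq v4 can also read the environment variable itself via its env operators, which avoids the quoting gymnastics (a sketch, assuming yq 4.x):
export IMAGE="your_image:latest"
# strenv() pulls the value of $IMAGE from the environment as a string.
yq eval '.spec.template.spec.containers[0].image = strenv(IMAGE)' manifests/daemonset.yaml | kubectl apply -f -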
I was facing the same problem. I created a Python script to change simple or complex values in, or add values to, a YAML file.
It became very handy in a situation similar to the one you describe. Also, switching to the Python domain allows for more complex scenarios.
The code and how to use it are available at this gist.
https://gist.github.com/washraf/f81153270c80b0b4ecf90a53872abde7
Please try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kdpd00201
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: frontend
        image: ifccncf/nginx:1.14.2
        ports:
        - containerPort: 8001
        env:
        - name: NGINX_PORT
          value: "8001"
My solution is then:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: frontend
  name: frontend
  namespace: kdpd00201
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: frontend
    spec:
      containers:
      - env: # modified level
        - name: NGINX_PORT
          value: "8080"
        image: lfccncf/nginx:1.13.7
        name: nginx
        ports:
        - containerPort: 8080

configmap change doesn't reflect automatically on respective pods

apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice1
  replicas: 3 # tells deployment to run 3 pods matching the template
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice1
    spec:
      containers:
      - name: consoleservice
        image: chintamani/insightvu:ms-console1
        readinessProbe:
          httpGet:
            path: /
            port: 8385
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /deploy/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-config
For creating the configmap I am using this command:
kubectl create configmap console-config --from-file=deploy/config
When I change the configmap, the change isn't reflected automatically; every time I have to restart the pod. How can I make this happen automatically?
Thank you guys, I was able to fix it. I am using Reloader to reflect any changes on the pods:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
Then add the annotation inside your deployment.yml file:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
  annotations:
    configmap.reloader.stakater.com/reload: "console-config"
It will restart your pods gradually.
Pods and configmaps are completely separate in Kubernetes, and pods don't automatically restart themselves when there is a configmap change.
There are a few alternatives to achieve this.
Use Wave, a Kubernetes controller which looks for a specific annotation and updates the deployment if there is any change in the configmap: https://github.com/pusher/wave
Use https://github.com/stakater/Reloader; Reloader can watch configmap changes and update the pods to pick up the new configuration.
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    metadata:
You can add a custom configHash annotation to the deployment and, in CI/CD or while deploying the application, use yq to replace that value with the hash of the configmap. In case of any change in the configmap, Kubernetes will detect the change in the deployment's annotation and reload the pods with the new configuration.
yq w --inplace deployment.yaml spec.template.metadata.annotations.configHash $(kubectl get cm/configmap -oyaml | sha256sum)
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: application
spec:
  selector:
    matchLabels:
      app: consoleservice1
  replicas: 3
  template:
    metadata:
      labels:
        app: consoleservice1
      annotations:
        configHash: ""
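Putting the pieces together, one possible flow (a sketch rather than part of the original answer; the configmap and deployment names are just examples) is to recompute the hash and patch the annotation whenever the config changes:
# Hash the current configmap contents.
HASH=$(kubectl get configmap console-config -o yaml | sha256sum | cut -d' ' -f1)
# Patch the pod-template annotation; the changed template triggers a rolling restart.
kubectl patch deployment application -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"configHash\":\"$HASH\"}}}}}"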

Update deployment labels using "kubectl patch" does not work

I am trying to update a label using kubectl.
When I use apply it works, but it doesn't when doing a patch.
I tried kubectl patch deployment nginx-deployment --patch "$(cat nginx.yaml)"; it reports no change where I would expect a label change.
These are the only changes in my YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: testLab
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: helloWorld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
Is there a restriction on what patch can update, or am I doing something wrong?
I also tried specifying --type strategic and other types, but none seem to work.
After executing kubectl patch on your second file (where you changed the label) you should see the following error:
Error from server: cannot restore map from string
After executing kubectl apply on this file you should get the following error:
error: error validating "nginx.yaml": error validating data: ValidationError(Deployment.metadata): unknown field "label" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
Your deployment file should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: helloWorld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
You missed adding a space after the app label.
Add the space and then execute kubectl patch deployment nginx-deployment --patch "$(cat nginx.yaml)" once again.
Here is some useful documentation: labels-selectors, kubernetes-deployments, kubernetes-patch.
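If the goal is only to change that single metadata label, a lighter-weight alternative (not part of the original answer) is kubectl label, which avoids round-tripping the whole manifest:
# Overwrite the existing app label on the Deployment's metadata.
kubectl label deployment nginx-deployment app=helloWorld --overwrite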
You should have something like this in your metadata:
metadata:
  name: nginx-deployment
  labels:
    label: testLabel2

View all applied configurations of an object

The command:
kubectl apply view-last-applied -f object.yml
displays the latest applied configuration file of an object.
Does a command exist that gives the entire 'applied' history of a given object?
For example, given the created configuration (using kubectl create -f pod.spec --save-config):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
and the applied configurations (using kubectl apply -f pod.spec):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9
revision 2:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
the command should give:
$ kubectl apply log -f pod.spec
applied <later date>:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
applied <earlier date>:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9
No, only the latest applied configuration is persisted in the object.
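The single revision that is kept lives in the kubectl.kubernetes.io/last-applied-configuration annotation, so you can at least read that back directly (a sketch using the pod name from the question):
# Print the last-applied configuration stored on the object itself.
kubectl get pod nginx-pod -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'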

How to configure a non-default serviceAccount on a deployment

My understanding of this doc page is that I can configure service accounts with Pods and hopefully also Deployments, so I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount its certificate into the pods of a deployment.
How do I achieve something similar to this example for a deployment?
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
As you will need to specify 'podSpec' in Deployment as well, you should be able to configure the service account in the same way. Something like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    # Below is the podSpec.
    metadata:
      name: ...
    spec:
      serviceAccountName: build-robot
      automountServiceAccountToken: false
      ...
Kubernetes nginx-deployment.yaml where serviceAccountName: test-sa
is used as a non-default service account.
Link: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
test-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test-ns
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-ns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: test-sa
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
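To confirm the deployment's pods actually picked up the non-default account, a quick check (a sketch, assuming the names above) is:
# Show which service account the running pod is using.
kubectl -n test-ns get pods -l app=nginx -o jsonpath='{.items[0].spec.serviceAccountName}'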