I want to pass arguments to a Kubernetes Deployment when I use the kubectl command to apply the deployment file.
Example: In my deployment .yaml, I have arguments as below, and I want to pass the argument values when I run kubectl apply -f <my-deployment>.yaml.
So, in the example below, I want to override the args userid and role when I run the above kubectl command.
spec:
  containers:
  - name: testimage
    image: <my image name>:<tag>
    args:
    - --userid=testuser
    - --role=manager
The simple answer is: you can't do that.
kubectl is not a template engine. As some people have mentioned, you have options like Helm or Kustomize which can solve this. I'd encourage you to look into Helm 3, since it solves your problem nicely with a command like helm upgrade --install ... --set userid=xxx --set role=yyy.
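As a rough sketch of what such a chart could look like (the file layout and default values here are illustrative assumptions, not something from the question):
# templates/deployment.yaml (fragment): args rendered from chart values
    args:
    - --userid={{ .Values.userid }}
    - --role={{ .Values.role }}
# values.yaml: defaults, overridable with --set userid=... --set role=...
userid: testuser
role: manager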
If you're stuck with kubectl only, though, you might want to use its ability to ingest YAML from stdin and pass your YAML through some kind of templating first, e.g. as follows:
...
args:
- --userid=$USER
- --role=$ROLE
...
cat resource.yaml | USER=testuser ROLE=manager envsubst | kubectl apply -f -
Obviously, any other string replacement method would do (sed, awk, etc.).
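For instance, a rough sed equivalent of the envsubst pipeline above (same placeholders, values hard-coded for illustration) would be:
cat resource.yaml | sed -e 's/\$USER/testuser/' -e 's/\$ROLE/manager/' | kubectl apply -f -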
This should be added in your deployment.yml. Note that each array element is passed as a separate argument, so keep each flag and its value together in one element:
spec:
  containers:
  - name: testimage
    image: <my image name>:<tag>
    args: ["--userid=testuser", "--role=manager"]
Related
Along with the container image in Kubernetes, I would like to update the sidecar image as well.
What would be the kubectl command for this?
Kubernetes has the set image command, which allows you to update an image to the expected version.
The syntax is:
kubectl set image deployment/{deployment-name} {container-name}={image}:{version}
For example:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
You can find the documentation for this command here: https://kubernetes.io/fr/docs/concepts/workloads/controllers/deployment/#mise-%C3%A0-jour-d-un-d%C3%A9ploiement
Assuming you have a Deployment spec that looks like this:
...
kind: Deployment
metadata:
  name: mydeployment
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: application
        image: nginx:1.14.0
        ...
      - name: sidecar
        image: busybox:3.15.0
        ...
kubectl set image deployment mydeployment application=nginx:1.16.0 sidecar=busybox:3.18.0
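One way to verify afterwards that both container images were updated is a jsonpath query:
kubectl get deployment mydeployment -o jsonpath='{.spec.template.spec.containers[*].image}'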
Besides kubectl set image, which other users have advised, you can also patch your resource (Pod, Deployment, etc.).
The Kubernetes patch documentation has some examples:
# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
# Disable a deployment livenessProbe using a json patch with positional arrays
kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
You can also edit your YAML configuration and apply it:
kubectl apply -f <yamlfile with new image>
I have a file for a Job resource, which looks something like the one below. I need to run multiple instances of this definition, with separate arguments for each.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: abc-
spec:
  template:
    spec:
      containers:
      - name: abc
        image: index.docker.io/some/image:latest
        imagePullPolicy: Always
      imagePullSecrets:
      - name: some_secret
      restartPolicy: Never
  backoffLimit: 4
I can successfully run this job resource with
kubectl create -f my-job.yml
But I'm not sure how to pass my arguments, corresponding to:
command: ['arg1', 'arg2']
I think updating the file with my dynamic args for each request is just messy.
I tried kubectl patch -f my-job.yml --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/", "value": {"command": ["arg1","arg2"] } }]', which works well for a Deployment kind, but for a Job it doesn't work.
I also tried
sudo kubectl run explicitly-provide-name-which-i-dont-want-to --image=index.docker.io/some/image:latest --restart=Never -- arg1 arg2, but with this I won't be able to pass the imagePullSecrets.
This is kind of a generic answer, just trying to guide you. In general, what you're describing is the need to 'parameterize' your Kubernetes deployment descriptors. There are different ways: some are simple, some are a bit hacky, and finally there is github.com/kubernetes/helm.
Personally, I would strongly suggest you go through installing Helm on your cluster and then 'migrate' your Job, or any vanilla Kubernetes deployment descriptor, into a Helm chart. This will eventually give you the 'parameterization' power you need to spin up jobs in different ways and with different configs.
But if this sounds like too much for you, I can recommend something I was doing before I discovered Helm. Using things like bash and envsubst, I was manually templating parts of the YAML file with placeholders (e.g. env variables) and then feeding the YAML to tools like envsubst, which replaced the placeholders with values from the environment. Ugly? Yes. Maintainable? Maybe, for a couple of simple examples. An envsubst example follows:
apiVersion: batch/v1
kind: Job
metadata:
spec:
  template:
    spec:
      containers:
      - name: abc
        image: index.docker.io/some/image:latest
        imagePullPolicy: Always
      imagePullSecrets:
      - name: $SOME_ENV_VALUE
      restartPolicy: Never
  backoffLimit: 4
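To render and submit this, you would run something along these lines (the file name is taken from the question; the secret value is an assumption):
SOME_ENV_VALUE=some_secret envsubst < my-job.yml | kubectl create -f -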
Hope that helps.. but seriously, if you have time, consider checking out Helm.
I would also consider sourcing the command arguments from environment variables. These variables can then be provided by Helm, as javapapo has mentioned.
I found this guide via Google:
Start Kubernetes job from command line with parameters
But the Helm chart solution suggested by javapapo is the best way, I guess.
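As a sketch of that idea: Kubernetes expands $(VAR) references in args from the container's own env entries, so something like the following (variable names made up for illustration) would work:
containers:
- name: abc
  image: index.docker.io/some/image:latest
  env:
  - name: ARG1
    value: some-value
  - name: ARG2
    value: other-value
  args: ["$(ARG1)", "$(ARG2)"]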
Here I can update the envs through kubectl patch; is there any way to delete envs other than re-deploying the deployment.yaml?
$ kubectl patch deployment demo-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name": "demo-deployment","env":[{"name":"foo","value":"bar"}]}]}}}}'
deployment.extensions "demo-deployment" patched
Can I delete the env "foo" through the command line without re-deploying the whole deployment?
This is coming late, but for newcomers: you can use the following kubectl command to remove an existing env variable from a deployment:
kubectl set env deployment/DEPLOYMENT_NAME VARIABLE_NAME-
Do not omit the hyphen (-) at the end.
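For the demo-deployment and the "foo" variable from the question, that would be:
kubectl set env deployment/demo-deployment foo-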
If you are fine with a redeployment, then follow the steps below:
Create a ConfigMap and include your environment variables.
Load the env variables from the ConfigMap in the deployment:
envFrom:
- configMapRef:
    name: app-config
If you want to delete an env variable, remove that key-value pair from the ConfigMap.
It will cause a redeployment. You can also delete the pods of the corresponding deployment.
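A minimal way to create such a ConfigMap, reusing the "foo" variable from the question (the app-config name comes from the snippet above):
kubectl create configmap app-config --from-literal=foo=bar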
Consider that containers is an array inside an object. Array elements can only be addressed by their index, as opposed to object fields, which can be addressed by key. See the reference here. So there is a workaround using the index.
Here you have envs that are placed into the container:
spec:
  containers:
  - env:
    - name: DEMO_GREETING
      value: Hello from the environment
    - name: DSADASD
      value: asdsad
Here you have a command to remove the env using its index:
kubectl patch deployments asd --type=json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/env/1"}]'
And the result:
spec:
  containers:
  - env:
    - name: DEMO_GREETING
      value: Hello from the environment
This will still restart your pod, however.
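If you need to look up the right index first, one option (using the asd deployment name from above) is to list the env names in order:
kubectl get deployment asd -o jsonpath='{.spec.template.spec.containers[0].env[*].name}'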
Hope that helps!
I'm using a CI to update my Kubernetes cluster whenever there's an update to an image. Whenever the image is pushed with the latest tag, it runs kubectl apply on the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that when the apply is run, a rolling deployment gets executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others have suggested, use a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name deployment_name=image_name:image_tag
In your case it would be:
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
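You can then watch the rollout complete with:
kubectl rollout status deployment/api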
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag, consider using:
1) kubectl rollout restart deployment/<name>
(reference).
2) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other chosen object), like the label selector, the pod labels, or other properties like the value of the NAMESPACE environment variable in your example.
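A minimal CI sketch combining these (the deployment name api and the file path come from the question):
kubectl apply --record --filename /tmp/deployment.yaml
kubectl rollout restart deployment/api
kubectl rollout status deployment/api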
I've run into the same problem, and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict. The applied YAML will generate both a Deployment and a ReplicaSet the first time it's run. Unfortunately, applying changes to the manifest likely only replaces the ReplicaSet, while the Deployment remains unchanged. This is a problem because some changes need to happen at the Deployment level, but the old Deployment hangs around. For best results, delete the Deployment and ensure all previous Deployments and ReplicaSets are deleted, then apply the updated manifest.
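For example, with the names from the question above:
kubectl delete deployment api
kubectl get deployments,replicasets    # confirm the old objects are gone
kubectl apply --record --filename /tmp/deployment.yaml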