So I have a Kubernetes CronJob object set to run periodically.
NAME                             SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
ticketing-job-lifetime-manager   45 */4 * * *   False     0        174m            25d
and I know how to call it manually:
# ticketing-job-manual-call will be the name of the job that runs
kubectl create job --from=cronjobs/ticketing-job-lifetime-manager ticketing-job-manual-call
BUT - what I want to do is call the job, but modify portions of it (shown below) before it is called. Specifically items.metadata.annotations and items.spec.jobTemplate.spec.containers.args.
If this is possible on-the-fly, I'd be over the moon. If it requires creating a temporary object, then I'd appreciate an approach to doing this that is robust, performant - and safe. Thanks!
apiVersion: v1
items:
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    annotations:
      <annotation-1>                 <- want to modify these
      <annotation-2>
      ..
      <annotation-n>
    creationTimestamp: "2022-05-03T13:24:49Z"
    labels:
      AccountID: foo
      FooServiceAction: "true"
      FooServiceManaged: "true"
      CronName: foo
    name: foo
    namespace: my-namespace
    resourceVersion: "298013999"
    uid: 57b2-4612-88ef-a0d5e26c8
  spec:
    concurrencyPolicy: Replace
    jobTemplate:
      metadata:
        annotations:
          <annotation-1>             <- want to modify these
          <annotation-2>
          ..
          <annotation-n>
        creationTimestamp: null
        labels:
          AccountID: 7761777c38d93b
          TicketServiceAction: "true"
          TicketServiceManaged: "true"
          CronName: ticketing-actions-7761777c38d93b-0
        name: ticketing-actions-7761777c38d93b-0
        namespace: rias
      spec:
        containers:
        - args:
          - --accountid=something    <- want to modify these
          - --faultzone=something
          - --type=something
          - --cronjobname=something
          - --plans=something
          command:
          - ./ticketing-job
          env:
          - name: FOO_BAR            <- may want to modify these
            value: "false"
          - name: FOO_BAZ
            value: "true"
The way to think about this is that Kubernetes resources are defined (definitively) by YAML|JSON config files. A useful advantage to having config files is that these can be checked into source control; you automatically audit your work if you create unique files for each resource (for every change).
Kubernetes (kubectl) isn't optimized|designed to tweak Resources, although you can use kubectl patch to update deployed Resources.
I encourage you to consider a more general approach that is applicable to any Kubernetes resource (not just Jobs) and that focuses on using YAML|JSON files as the way to represent state:
kubectl get the resource and output it as YAML|JSON (--output=json|yaml) persisting the result to a file (that could be source-controlled)
Mutate the file using any of many tools but preferably YAML|JSON processing tools (e.g. yq or jq)
kubectl create or kubectl apply the resulting file, which reflects the intended configuration of the new resource.
By way of example, assuming you use jq:
# Output the CronJob 'ticketing-job-lifetime-manager' as a JSON file
kubectl get cronjob/ticketing-job-lifetime-manager \
--namespace=${NAMESPACE} \
--output=json > ${PWD}/ticketing-job-lifetime-manager.json
# E.g. replace '.metadata.annotations' entirely (annotations are a string-to-string map)
jq '.metadata.annotations={"foo":"x","bar":"y"}' \
${PWD}/ticketing-job-lifetime-manager.json \
> ${PWD}/new-job.json
# E.g. replace the 'args' of the container named 'foo'
# (write to a new file; redirecting back onto the input file would truncate it first)
jq '(.spec.jobTemplate.spec.template.spec.containers[] | select(.name=="foo") | .args) = ["--key=value"]' \
${PWD}/new-job.json \
> ${PWD}/new-job-2.json
# Etc.
# Apply
kubectl create \
--filename=${PWD}/new-job-2.json \
--namespace=${NAMESPACE}
NOTE You can pipe the output from kubectl get through jq and into kubectl create if you wish, but it's useful to keep a file-based record of the resource.
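For instance, a minimal sketch of that pipe (the annotation key/value and the new object name are purely illustrative, and server-populated fields generally have to be stripped before create will accept the object):
kubectl get cronjob/ticketing-job-lifetime-manager --namespace=${NAMESPACE} --output=json |
  jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .status)
      | .metadata.name = "ticketing-job-manual-copy"
      | .metadata.annotations["my-annotation"] = "new-value"' |
  kubectl create --namespace=${NAMESPACE} --filename=-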
Having to deal with YAML|JSON config files is a common issue with Kubernetes (and every other technology that uses them). There are other tools, e.g. jsonnet and CUE, that try to provide a more programmatic way to manage YAML|JSON.
I would like to benchmark my system by creating many pods running the same container. I'm using the following example:
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
How can I run this YAML file multiple times such that a new pod with a different name is created?
When creating several similar objects to benchmark some component in Kubernetes, I would either sed the resource names in some file/template, e.g.:
#!/bin/sh
# make sure my-bench.yaml resource names are set to/based on PLACEHOLDER_NAME
for count in $(seq 1 10)
do
sed "s|PLACEHOLDER_NAME|bench-$count|" my-bench.yaml | kubectl apply -f-
done
This can be useful when you have lots of objects, keeping your script readable.
When I don't have a lot of YAML to write, I would just use cat, e.g.:
#!/bin/sh
for count in $(seq 1 10)
do
cat <<EOF | kubectl apply -f-
apiVersion: v1
kind: Pod
metadata:
  name: bench-$count
  namespace: my-bench-ns
spec:
  ...
EOF
done
That said, as suggested by #replicaSets and #karan525: when working with a deployment/replicaset/... you should be able to scale out by adding replicas.
You can use replicas in spec. All pods created will have different names but the container in every one of them will be the same
Your setup is running a bare Kubernetes Pod. This is unusual for a couple of reasons, one of the key ones being that there is only one of it.
In practice you almost always use one of the higher-level objects; most often a Deployment, but occasionally a StatefulSet (if you need persistence or a fixed, ordered naming for the Pods) or a Job (for things that need to run once and then exit).
Both Deployments and StatefulSets support replicas: which do exactly what you want, run multiple identical copies of a Pod. You'll frequently want this for resiliency if nothing else.
apiVersion: apps/v1              # <-- matches kind:
kind: Deployment                 # <-- not Pod
metadata:
  name: cuda-vector-add
spec:
  replicas: 3                    # <-- add
  selector:                      # <-- required by Deployments; must match the template labels
    matchLabels:
      app: cuda-vector-add
  template:
    metadata:
      labels:
        app: cuda-vector-add
    spec:                        # <-- same as your existing Pod spec
      restartPolicy: Always      # <-- Deployments only allow Always (the bare Pod used OnFailure)
      containers:
        - name: cuda-vector-add
          image: "k8s.gcr.io/cuda-vector-add:v0.1"
          resources:
            limits:
              nvidia.com/gpu: 1
You can use imperative commands like kubectl scale to dynamically change the replicas: value.
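For example, assuming the Deployment above keeps the name cuda-vector-add:
# Scale the existing Deployment to 10 replicas
kubectl scale deployment/cuda-vector-add --replicas=10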
Is this a valid imperative command for creating a job?
kubectl create job my-job --image=busybox
I see this in https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands, but the command is not working. I am getting an error as below:
Error: unknown flag: --image
What is the correct imperative command for creating a job?
Try this one
kubectl create cronjob my-job --schedule="0,15,30,45 * * * *" --image=busybox
What you have should work, though it is not recommended as an approach anymore. I would check what version of kubectl you have, and possibly upgrade it if you aren't using the latest.
That said, the more common approach these days is to write a YAML file containing the Job definition and then run kubectl apply -f myjob.yaml or similar. This file-driven approach allows for more natural version control, editing, review, etc.
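As a minimal sketch of such a file for the busybox Job from the question (the echo command is just a placeholder):
# myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  template:
    spec:
      containers:
      - name: my-job
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never
Then apply it with kubectl apply -f myjob.yaml.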
Using the correct value for the --restart field on "kubectl run" will make the run command create a deployment, a job or a cronjob:
--restart='Always': The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to 'Always'
a deployment is created, if set to 'OnFailure' a job is created, if set to 'Never', a regular pod is created. For the
latter two --replicas must be 1. Default 'Always', for CronJobs `Never`.
Use "kubectl run" for creating basic kubernetes job using imperatively command as below
master $ kubectl run nginx --image=nginx --restart=OnFailure --dry-run -o yaml > output.yaml
The above should produce an "output.yaml" like the example below. You can edit this YAML for more advanced configuration as needed and create the job with "kubectl create -f output.yaml", or, if you just need a basic job, remove the --dry-run option from the above command and the job will be created directly.
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      restartPolicy: OnFailure
status: {}
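On newer kubectl versions the run generators that produced Jobs were removed, so a comparable scaffold can be generated with kubectl create job instead (the nginx name and image mirror the example above; --dry-run now takes an explicit client value):
kubectl create job nginx --image=nginx --dry-run=client -o yaml > output.yaml
# ...edit output.yaml as needed, then:
kubectl create -f output.yaml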
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig
I have only seen examples assigning one securityPolicy, but I want to assign multiple ones.
I created the following backend config with 2 policies and applied it to my service with beta.cloud.google.com/backend-config: my-backend-config
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backend-config
spec:
  securityPolicy:
    name: "policy-one"
    name: "policy-two"
When I deploy, only "policy-two" is applied. Can I assign two policies somehow? I see no docs for this.
There's nothing in the docs that says you can specify more than one policy. Even the spec uses securityPolicy in the singular, and the YAML structure is not an array.
Furthermore, if you look at your spec:
spec:
  securityPolicy:
    name: "policy-one"
    name: "policy-two"
Because of the duplicated key, YAML parsers keep only the last value, so the first name: "policy-one" is ignored, which explains why only name: "policy-two" is used. You can check it on YAMLlint. To have more than one value in your YAML you would have to convert securityPolicy to an array, something like this:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backend-config
spec:
  securityPolicy:
  - name: "policy-one"
  - name: "policy-two"
The issue with this is that it's probably not supported by GCP.
This same behavior happens with the regular HTTP(S) Load Balancers: it looks like it's only possible to attach a single Security Policy per target, and the same limitation affects the HTTP(S) Load Balancers created by the GKE ingress.
It is possible to add more rules to that single security policy. The new rules can be added in the same way as the first rule was added; however, the priorities of these rules must be different, as in the example below:
~$ gcloud beta compute security-policies rules create 1000 \
--security-policy ca-how-to-security-policy \
--src-ip-ranges "192.0.2.0/24" \
--action "deny-404"
~$ gcloud beta compute security-policies rules create 1001 \
--security-policy ca-how-to-security-policy \
--src-ip-ranges "11.16.0.0/24" \
--action "deny-404"
I am facing a weird behaviour with kubectl and --dry-run.
To simplify let's say that I have the following yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginxsdf
        imagePullPolicy: Always
        name: nginx
Modifying, for example, the image or the number of replicas:
kubectl apply -f Deployment.yaml -o yaml --dry-run outputs the resource with the OLD specification
kubectl apply -f Deployment.yaml -o yaml outputs the resource with the NEW specification
According to the documentation:
--dry-run=false: If true, only print the object that would be sent, without sending it.
However, the object printed is the old one and not the one that will be sent to the API server.
Tested on minikube and GKE v1.10.0.
In the meantime I opened a new GitHub issue for it:
https://github.com/kubernetes/kubernetes/issues/72644
I got the following answer on the Kubernetes issue page:
When updating existing objects, kubectl apply doesn't send an entire object, just a patch. It is not exactly correct to print either the existing object or the new object in dry-run mode... the outcome of the merge is what should be printed.
For kubectl to be able to accurately reflect the result of the apply, it would need to have the server-side apply logic clientside, which is a non-goal.
Current efforts are directed at moving apply logic to the server. As part of that, the ability to dry-run server-side has been added. kubectl apply --server-dry-run will do what you want, printing the result of the apply merge, without actually persisting it.
#apelisse we should probably update the flag help for apply and possibly print a warning when using --dry-run when updating an object via apply to document the limitations of --dry-run and direct people to use --server-dry-run
The latest version of the client uses:
kubectl apply -f Deployment.yaml --dry-run=server
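For reference, both dry-run modes exist in current clients and differ in where the object is evaluated, which is exactly the limitation discussed above:
# Client-side: evaluated locally, without the server-side apply merge
kubectl apply -f Deployment.yaml --dry-run=client -o yaml
# Server-side: the API server computes the apply merge and returns the result without persisting it
kubectl apply -f Deployment.yaml --dry-run=server -o yaml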
I'm using a CI to update my Kubernetes cluster whenever there's an update to an image. Whenever the image is pushed with the latest tag, the CI runs kubectl apply against the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that when the apply is run, a rolling deployment gets executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others suggested, have a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name deployment_name=image_name:image_tag
In your case it would be
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
As #ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag, consider using:
1 ) kubectl rollout restart deployment/<name>
(reference).
2 ) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other chosen object), such as the label selector, the pod labels, or other properties like the value of the NAMESPACE environment variable in your example.
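For instance, a sketch of re-pointing that NAMESPACE variable with a strategic merge patch (OTHER_KEY is a made-up ConfigMap key; the deployment, container, and variable names come from the question's manifest):
# Strategic merge: containers and env entries are merged by 'name', so only this variable changes
kubectl patch deployment api \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"api","env":[{"name":"NAMESPACE","valueFrom":{"configMapKeyRef":{"name":"config","key":"OTHER_KEY"}}}]}]}}}}'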
I've run into the same problem and none of the solutions posted so far will help. The solution is easy, but not easy to see or predict. The applied yaml will generate both a deployment and a replicaset the first time it's run. Unfortunately, applying changes to the manifest likely only replaces the replicaset, while the deployment will remain unchanged. This is a problem because some changes need to happen at the deployment level, but the old deployment hangs around. To have best results, delete the deployment and ensure all previous deployments and replicasets are deleted. Then apply the updated manifest.
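In command form, that last suggestion looks roughly like this (destructive: it deletes the running Deployment and its ReplicaSets/Pods before re-creating them; names are taken from the question's manifest):
# Remove the existing Deployment (its ReplicaSets and Pods go with it)
kubectl delete deployment api
# Double-check nothing is left behind
kubectl get replicasets,pods -l app=api
# Re-create from the updated manifest
kubectl apply --record --filename /tmp/deployment.yaml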