This is a pretty basic question that I cannot seem to find an answer to: how do I set the concurrencyPolicy in a CronJob? I have tried variations of my current file config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-newspaper
spec:
  schedule: "* */3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-newspaper
            image: bdsdev.azurecr.io/job-newspaper:latest
            imagePullPolicy: Always
            resources:
              limits:
                cpu: "2048m"
                memory: "10G"
              requests:
                cpu: "512m"
                memory: "2G"
            command: ["spark-submit","/app/newspaper_job.py"]
          restartPolicy: OnFailure
          concurrencyPolicy: Forbid
When I run kubectl create -f ./job.yaml I get the following error:
error: error validating "./job.yaml": error validating data:
ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown
field "concurrencyPolicy" in io.k8s.api.core.v1.PodSpec; if you choose
to ignore these errors, turn validation off with --validate=false
I am probably either putting this property in the wrong place or calling it by the wrong name; I just cannot find it in the documentation. Thanks!
The property concurrencyPolicy is part of the CronJob spec, not the PodSpec. You can inspect the spec for a given object locally using kubectl explain, like:
kubectl explain --api-version="batch/v1beta1" cronjobs.spec
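On a cluster that still serves batch/v1beta1 this prints the CronJobSpec fields (abbreviated here, and the exact wording depends on your kubectl version); note that concurrencyPolicy is listed at this level, not inside the pod spec:
KIND:     CronJob
VERSION:  batch/v1beta1

RESOURCE: spec <Object>

FIELDS:
   concurrencyPolicy            <string>
   failedJobsHistoryLimit       <integer>
   jobTemplate                  <Object> -required-
   schedule                     <string> -required-
   startingDeadlineSeconds      <integer>
   successfulJobsHistoryLimit   <integer>
   suspend                      <boolean>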
There you can see the structure/spec of the CronJob object, which in your case should be
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-newspaper
spec:
  schedule: "* */3 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-newspaper
            image: bdsdev.azurecr.io/job-newspaper:latest
            imagePullPolicy: Always
            resources:
              limits:
                cpu: "2048m"
                memory: "10G"
              requests:
                cpu: "512m"
                memory: "2G"
            command: ["spark-submit","/app/newspaper_job.py"]
          restartPolicy: OnFailure
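If you want to confirm the accepted values, you can drill one level deeper with the same command; concurrencyPolicy takes Allow (the default), Forbid or Replace:
kubectl explain --api-version="batch/v1beta1" cronjobs.spec.concurrencyPolicy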
I am trying to use the example cronjob that is explained in the Kubernetes documentation here. However, when I check it in Lens (a tool to display Kubernetes info), I get an error when the pod is created. The only difference between the Kubernetes example and my code is that I added a namespace, since I do not own the server I am working on. Any help is appreciated. Below are my error and yaml file.
Error creating: pods "hello-27928364--1-ftzjb" is forbidden: exceeded quota: test-rq, requested: limits.cpu=16,limits.memory=64Gi,requests.cpu=16,requests.memory=64Gi, used: limits.cpu=1,limits.memory=2G,requests.cpu=1,requests.memory=2G, limited: limits.cpu=12,limits.memory=24Gi,requests.cpu=12,requests.memory=24Gi
This is my yaml file that I apply.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Your namespace seems to have a quota configured. Try to configure the resources on your CronJob, for example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            resources:
              requests:
                memory: "64Mi"
                cpu: "250m"
              limits:
                memory: "128Mi"
                cpu: "500m"
          restartPolicy: OnFailure
Note the resources: block and its indentation.
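To see what the quota actually allows and how much of it is already in use, you can inspect it directly; test-rq and test are the quota and namespace names taken from the error message above:
kubectl describe resourcequota test-rq -n test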
I have a bunch of Kubernetes resources (i.e. a lot of yaml files), and I would like to reduce them to a result that contains only certain paths.
My current brutal approach looks like:
cat my-list-of-deployments | yq eval 'select(.kind == "Deployment") \
| del(.metadata.labels, .spec.replicas, .spec.selector, .spec.strategy, .spec.template.metadata) \
| del(.spec.template.spec.containers.[0].env, .spec.template.spec.containers.[0].image)' -
Of course this is super inefficient.
In the path .spec.template.spec.containers.[0] I would ideally like to delete everything except .spec.template.spec.containers.[*].image and .spec.template.spec.containers.[*].resources (where "*" means: keep all array elements).
I tried something like
del(.spec.template.spec.containers.[0] | select(. != "name"))
But this did not work for me. How can I make this better?
Example input:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
      - image: app-one:0.2.0
        name: app-one
        ports:
        - containerPort: 80
          name: http
        resources:
          limits:
            cpu: 50m
            memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
      - image: redis:3.2-alpine
        livenessProbe:
          exec:
            command:
            - redis-cli
            - info
            - server
          periodSeconds: 20
        name: app-two
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
        startupProbe:
          periodSeconds: 2
          tcpSocket:
            port: 6379
Desired output:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
      - name: app-one
        resources:
          limits:
            cpu: 50m
            memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
      - name: app-two
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
The key is to use the with_entries function inside the .containers array to keep only the required fields (name and resources), and the |= update operator to put the modified result back:
yq eval '
select(.kind == "Deployment").spec.template.spec.containers[] |=
with_entries( select(.key == "name" or .key == "resources") ) ' yaml
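If you also want the other deletions from your original command, they can be chained in front of the same update. A sketch, reusing the paths and input file name from the question and assuming mikefarah yq v4:
yq eval '
  select(.kind == "Deployment")
  | del(.metadata.labels, .spec.replicas, .spec.selector, .spec.strategy, .spec.template.metadata)
  | .spec.template.spec.containers[] |= with_entries(select(.key == "name" or .key == "resources"))
' my-list-of-deployments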
I am trying to run a test container through Kubernetes, but I get an error. Below is my deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  containers:
    requests:
      storage: 2Gi
      cpu: 0.5
      memory: "128M"
    limits:
      cpu: 0.5
      memory: "128M" # This is the line throws error
      - type: PersistentVolumeClaim
        max:
          storage: 2Gi
        min:
          storage: 1Gi
.
.
.
When I run kubectl apply -k ., I get this error:
error: accumulating resources: accumulation err='accumulating resources from 'deploy.yaml': yaml: line 19: did not find expected key': got file 'deploy.yaml', but '/home/kus/deploy.yaml' must be a directory to be a root
I tried to read the Kubernetes documentation but I cannot understand why I'm getting this error.
Edit 1
I changed my deploy.yaml file as @whites11 said; now it looks like this:
.
.
.
spec:
  limits:
    memory: "128Mi"
    cpu: 0.5
.
.
.
Now I'm getting this error:
resourcequota/storagequota unchanged
service/nginx configured
error: error validating ".": error validating data: ValidationError(Deployment.spec): unknown field "limits" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
The file you shared is not valid YAML.
limits:
  cpu: 0.5
  memory: "128M" # This is the line throws error
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi
    min:
      storage: 1Gi
The limits field has mixed types (hash and collection) and is obviously wrong.
It seems like you took the syntax for specifying pod resource limits and mixed it with the one used to limit storage consumption.
In your Deployment spec you might want to set limits simply like this:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - ...
        resources:
          limits:
            cpu: 0.5
            memory: "128M"
And deal with storage limits at the namespace level as described in the documentation.
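For the storage side, that usually means a LimitRange in the namespace rather than anything inside the Deployment. A minimal sketch reusing the min/max values from your file (the object name storagelimits is just an example):
apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimits
spec:
  limits:
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi
    min:
      storage: 1Gi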
You should change your yaml file to this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
I have this yaml for a cronjob running in Google Kubernetes Engine:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: 2019-04-22T18:20:51Z
  name: cron-field-velocity-field-details-manager
  namespace: master
  resourceVersion: "73643714"
  selfLink: /apis/batch/v1beta1/namespaces/master/cronjobs/cron-field-velocity-field-details-manager
  uid: 5be9e8d5-652b-11e9-bf91-42010a9600af
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: cron-field-velocity-field-details-manager
            chart: field-velocity-field-details-manager-0.0.1
            heritage: Tiller
            release: master-field-velocity-field-details-manager
        spec:
          containers:
          - args:
            - ./field-velocity-field-details-manager.dll
            command:
            - dotnet
            image: taranisag/field-velocity-field-details-manager:master.993b179
            imagePullPolicy: IfNotPresent
            name: cron-field-velocity-field-details-manager
            resources:
              requests:
                cpu: "2"
                memory: 2Gi
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          imagePullSecrets:
          - name: regsecret
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: '* 2,14 * * *'
  successfulJobsHistoryLimit: 3
  suspend: false
status:
  lastScheduleTime: 2019-06-20T02:00:00Z
It was working for a few weeks, meaning the job was running twice a day, but it stopped running a week ago.
There was no indication of an error, and the last run completed successfully.
Is there something wrong in the yaml I defined?
I have a cronjob that sends out emails to customers. It occasionally fails for various reasons. I do not want it to restart, but it still does.
I am running Kubernetes on GKE. To get it to stop, I have to delete the CronJob and then kill all the pods it creates manually.
This is bad, for obvious reasons.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: 2018-06-21T14:48:46Z
  name: dailytasks
  namespace: default
  resourceVersion: "20390223"
  selfLink: [redacted]
  uid: [redacted]
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - kubernetes/daily_tasks.sh
            env:
            - name: DB_HOST
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            envFrom:
            - secretRef:
                name: my-secrets
            image: [redacted]
            imagePullPolicy: IfNotPresent
            name: dailytasks
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: 0 14 * * *
  successfulJobsHistoryLimit: 3
  suspend: true
status:
  active:
  - apiVersion: batch
    kind: Job
    name: dailytasks-1533218400
    namespace: default
    resourceVersion: "20383182"
    uid: [redacted]
  lastScheduleTime: 2018-08-02T14:00:00Z
It turns out that you have to set backoffLimit: 0 in combination with restartPolicy: Never and concurrencyPolicy: Forbid.
backoffLimit means the number of times it will retry before it is considered failed. The default is 6.
concurrencyPolicy set to Forbid means it will run 0 or 1 times, but not more.
restartPolicy set to Never means it won't restart on failure.
You need to do all 3 of these things, or your cronjob may run more than once.
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      [ADD THIS -->]backoffLimit: 0
      template:
        ... MORE STUFF ...
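If you prefer not to re-apply the whole manifest, the same change can be made in place with a patch (a sketch using the CronJob name from your yaml; the default strategic merge patch is enough here):
kubectl patch cronjob dailytasks -p '{"spec": {"jobTemplate": {"spec": {"backoffLimit": 0}}}}'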
The Kubernetes CronJob resource has a field, suspend, in its spec.
You can't do it by default, but if you want to ensure it doesn't run again, you could update the script that sends the emails and have it patch the CronJob resource to set suspend: true when it fails.
Something like this:
kubectl patch cronjob <name> -p '{"spec": { "suspend": true }}'
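And once the underlying failure is fixed, the same mechanism turns the job back on:
kubectl patch cronjob <name> -p '{"spec": { "suspend": false }}'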