Trying example CronJob from Kubernetes with errors - kubernetes

I am trying to use the example CronJob that is explained in the Kubernetes documentation here. However, when I check it in Lens (a tool that displays Kubernetes info), I see an error when the pod is created. The only difference between the Kubernetes example and my code is that I added a namespace, since I do not own the server I am working on. Any help is appreciated. Below are my error and YAML file.
Error creating: pods "hello-27928364--1-ftzjb" is forbidden: exceeded quota: test-rq, requested: limits.cpu=16,limits.memory=64Gi,requests.cpu=16,requests.memory=64Gi, used: limits.cpu=1,limits.memory=2G,requests.cpu=1,requests.memory=2G, limited: limits.cpu=12,limits.memory=24Gi,requests.cpu=12,requests.memory=24Gi
This is the YAML file that I apply:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Your namespace seems to have a quota configured. Try configuring the resources on your CronJob, for example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            resources:
              requests:
                memory: "64Mi"
                cpu: "250m"
              limits:
                memory: "128Mi"
                cpu: "500m"
          restartPolicy: OnFailure
Note the resources: section and its indentation.
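If you are unsure what the quota allows, you can inspect it directly; the quota name test-rq and the namespace test come from the error message and manifest above:
$ kubectl describe resourcequota test-rq --namespace test
The output lists the hard limits next to the current usage, so you can size the container's requests and limits to fit within what is left.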

Related

How to execute a shell script in a Kubernetes CronJob

I would like to run a shell script inside Kubernetes using a CronJob. Here is my CronJob.yaml file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - /home/admin_/test.sh
          restartPolicy: OnFailure
The CronJob has been created (kubectl apply -f CronJob.yaml). When I list the CronJobs I can see it (kubectl get cj), and when I run kubectl get pods I can see the pod being created, but then the pod crashes.
Can anyone help me learn how to create a CronJob inside Kubernetes, please?
As correctly pointed out in the comments, you need to provide the script file in order to execute it via your CronJob. You can do that by mounting the file within a volume. For example, your CronJob could look like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - /myscript/test.sh
            volumeMounts:
            - name: script-dir
              mountPath: /myscript
          restartPolicy: OnFailure
          volumes:
          - name: script-dir
            hostPath:
              path: /path/to/my/script/dir
              type: Directory
The example above shows how to use the hostPath type of volume to mount the script file.
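If placing the script on the node is not an option (hostPath requires the file to exist on whichever node runs the pod), a ConfigMap-backed volume is a common alternative. A minimal sketch, assuming the script is small and using an illustrative name (script-cm):
apiVersion: v1
kind: ConfigMap
metadata:
  name: script-cm
data:
  test.sh: |
    #!/bin/sh
    date; echo Hello from the CronJob
Then, in the CronJob above, the volumes section would become:
          volumes:
          - name: script-dir
            configMap:
              name: script-cm
              defaultMode: 0755  # mount the script as executable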

Kubernetes Deployment object environment variables not present in container

I am currently learning Kubernetes, and I am facing a bit of a wall.
I am trying to pass environment variables from my YAML file definition
to my container, but the variables do not seem to be present afterwards.
kubectl exec <pod name> -- printenv gives me the list of environment
variables, but the ones I defined in my YAML file are not present.
I defined the environment variables in my deployment as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-boot
  labels:
    app: hello-world-boot
spec:
  selector:
    matchLabels:
      app: hello-world-boot
  template:
    metadata:
      labels:
        app: hello-world-boot
    containers:
      - name: hello-world-boot
        image: lightmaze/hello-world-spring:latest
        env:
          - name: HELLO
            value: "Hello there"
          - name: WORLD
            value: "to the entire world"
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
          - containerPort: 8080
    selector:
      app: hello-world-boot
Hopefully someone can see where I failed in the YAML :)
If I correct the errors in your Deployment configuration (the pod template is missing its spec: field, so containers sits directly under template:, and there is a stray selector: after the container list) so that it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-boot
  labels:
    app: hello-world-boot
spec:
  selector:
    matchLabels:
      app: hello-world-boot
  template:
    metadata:
      labels:
        app: hello-world-boot
    spec:
      containers:
        - name: hello-world-boot
          image: lightmaze/hello-world-spring:latest
          env:
            - name: HELLO
              value: "Hello there"
            - name: WORLD
              value: "to the entire world"
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
And deploy it into my local minikube instance:
$ kubectl apply -f pod.yml
Then it seems to work as you intended:
$ kubectl exec -it hello-world-boot-7568c4d7b5-ltbbr -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin
HOSTNAME=hello-world-boot-7568c4d7b5-ltbbr
TERM=xterm
HELLO=Hello there
WORLD=to the entire world
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
LANG=C.UTF-8
JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
JAVA_VERSION=8u212
JAVA_ALPINE_VERSION=8.212.04-r0
HOME=/home/spring
If you look at the above output, you can see both the HELLO and WORLD environment variables you defined in your Deployment.
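As a quicker check, printenv also accepts the names of specific variables, so you can query just the ones you defined (the pod name will differ on your cluster):
$ kubectl exec hello-world-boot-7568c4d7b5-ltbbr -- printenv HELLO WORLD
which should print just Hello there and to the entire world.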

Trying to create a simple CronJob

$ kubectl api-versions | grep batch
batch/v1
batch/v1beta1
When attempting to create this CronJob object, which has a single container and an empty volume, I get this error:
$ kubectl apply -f test.yaml
error: error parsing test.yaml: error converting YAML to JSON: yaml: line 19: did not find expected key
The YAML
$ cat test.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dummy
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: app
            image: alpine
            command:
            - echo
            - Hello World!
            volumeMounts:
              - mountPath: /data
              name: foo
          restartPolicy: OnFailure
          volumes:
          - name: foo
            emptyDir: {}
Based on my reading of the API, I believe my schema is legit. Any ideas or help would be greatly appreciated.
I think it's an indentation issue: under volumeMounts, name: foo is not aligned with mountPath:, which is what the parser trips over at line 19. The YAML below should work.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dummy
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: app
            image: alpine
            command:
            - echo
            - Hello World!
            volumeMounts:
            - mountPath: /data
              name: foo
          restartPolicy: OnFailure
          volumes:
          - name: foo
            emptyDir: {}
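As an aside, errors like this can be caught before touching the cluster by running a client-side dry run against the manifest (--dry-run=client on newer kubectl versions; older releases use plain --dry-run):
$ kubectl apply --dry-run=client -f test.yaml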

Deploy pods on different nodes

I have a namespace called airflow that has 2 pods: webserver and scheduler. I want to deploy the scheduler on node A and the webserver on node B.
And here you can see deployment files:
scheduler:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: airflow
  name: airflow-scheduler
  labels:
    name: airflow-scheduler
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow-scheduler
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: scheduler
        image: 123423.dkr.ecr.us-east-1.amazonaws.com/airflow:$COMMIT_SHA1
        volumeMounts:
        - name: logs
          mountPath: /logs
        command: ["airflow"]
        args: ["scheduler"]
        imagePullPolicy: Always
        resources:
          limits:
            memory: "3072Mi"
          requests:
            cpu: "500m"
            memory: "2048Mi"
      volumes:
      - name: logs
        persistentVolumeClaim:
          claimName: logs
webserver:
apiVersion: v1
kind: Service
metadata:
  name: airflow-webserver
  namespace: airflow
  labels:
    run: airflow-webserver
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    run: airflow-webserver
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: airflow-webserver
  namespace: airflow
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - airflow.awesome.com.br
    secretName: airflow-crt
  rules:
  - host: airflow.awesome.com.br
    http:
      paths:
      - path: /
        backend:
          serviceName: airflow-webserver
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: airflow
  name: airflow-webserver
  labels:
    run: airflow-webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: airflow-webserver
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: webserver
        image: 123423.dkr.ecr.us-east-1.amazonaws.com/airflow:$COMMIT_SHA1
        volumeMounts:
        - name: logs
          mountPath: /logs
        ports:
        - containerPort: 8080
        command: ["airflow"]
        args: ["webserver"]
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "200m"
            memory: "3072Mi"
          requests:
            cpu: "100m"
            memory: "2048Mi"
      volumes:
      - name: logs
        persistentVolumeClaim:
          claimName: logs
What's the proper way to ensure that pods will be deployed on different nodes?
edit1:
anti-affinity is not working. I've tried to set podAntiAffinity on the scheduler, but it has no effect:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: name
          operator: In
          values:
          - airflow-webserver
      topologyKey: "kubernetes.io/hostname"
If you want these pods to run on different nodes but you don't care which nodes exactly, you can use the Pod anti-affinity feature. It basically defines that pod X must not run on the same node as pod Y (it can also be scoped to failure domains / zones, not just nodes) and uses labels to select the pods, so you will need to add some labels and reference them in the spec sections. More info is in the Kube docs.
If, in addition, you want to specify on which node a pod should run, you can use the Node affinity feature. See the Kube docs for more details.
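One thing worth checking in the edit above: the podAntiAffinity snippet matches on the label key name, while the webserver pods are labeled run: airflow-webserver, so the selector never matches any pod. A sketch of the scheduler's anti-affinity using the label that actually exists on the webserver pods (placed under the pod template's spec):
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: run
          operator: In
          values:
          - airflow-webserver
      topologyKey: "kubernetes.io/hostname"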

Setting concurrency on a Kubernetes CronJob

This is a pretty basic question that I cannot seem to find an answer to: I cannot figure out how to set the concurrencyPolicy in a CronJob. I have tried variations of my current file config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-newspaper
spec:
  schedule: "* */3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-newspaper
            image: bdsdev.azurecr.io/job-newspaper:latest
            imagePullPolicy: Always
            resources:
              limits:
                cpu: "2048m"
                memory: "10G"
              requests:
                cpu: "512m"
                memory: "2G"
            command: ["spark-submit","/app/newspaper_job.py"]
          restartPolicy: OnFailure
          concurrencyPolicy: Forbid
When I run kubectl create -f ./job.yaml I get the following error:
error: error validating "./job.yaml": error validating data:
ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown
field "concurrencyPolicy" in io.k8s.api.core.v1.PodSpec; if you choose
to ignore these errors, turn validation off with --validate=false
I am probably either putting this property in the wrong place or calling it by the wrong name; I just cannot find it in the documentation. Thanks!
The property concurrencyPolicy is part of the CronJob spec, not the PodSpec. You can inspect the spec of a given object locally using kubectl explain, like
kubectl explain --api-version="batch/v1beta1" cronjobs.spec
There you can see the structure/spec of the CronJob object, which in your case should be
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-newspaper
spec:
  schedule: "* */3 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-newspaper
            image: bdsdev.azurecr.io/job-newspaper:latest
            imagePullPolicy: Always
            resources:
              limits:
                cpu: "2048m"
                memory: "10G"
              requests:
                cpu: "512m"
                memory: "2G"
            command: ["spark-submit","/app/newspaper_job.py"]
          restartPolicy: OnFailure
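You can drill one level deeper with the same command to see the allowed values for the field (Allow, Forbid, Replace):
kubectl explain --api-version="batch/v1beta1" cronjobs.spec.concurrencyPolicy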