Kubernetes 1.7.7: how to enable Cron Jobs? - kubernetes

I've added --runtime-config=batch/v2alpha1=true to the kube-apiserver config like so:
... other stuff
command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:2379"
- "--etcd-quorum-read=true"
- "--advertise-address=10.240.255.15"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--storage-backend=etcd2"
- "--v=4"
- "—-runtime-config=batch/v2alpha1=true"
... etc
but after restarting the master kubectl api-versions still shows only batch/v1, no v2alpha1 to be seen.
$ kubectl api-versions
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1beta1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
certificates.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Here's my job definition:
kind: CronJob
apiVersion: batch/v2alpha1
metadata:
  name: mongo-backup
spec:
  schedule: "* */1 * * *"
  jobTemplate:
    spec:
      ... etc
And the error I get when I try to create the job:
$ kubectl create -f backup-job.yaml
error: error validating "backup-job.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"batch", Version:"v2alpha1", Kind:"CronJob"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl create -f backup-job.yaml --validate=false
error: unable to recognize "backup-job.yaml": no matches for batch/, Kind=CronJob
What else do I need to do?
P.S. This is on Azure ACS; I don't think it makes a difference, though.

You may use the newer API version here, apiVersion: batch/v1beta1; that should fix the issue.
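In practice only the apiVersion line of the manifest changes; a minimal sketch (check kubectl api-versions first to confirm your cluster actually serves batch/v1beta1):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongo-backup
spec:
  schedule: "* */1 * * *"
  jobTemplate:
    spec:
      ... etc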

The Kubernetes v1.21 release notes state that:
The batch/v2alpha1 CronJob type definitions and clients are deprecated and removed. (#96987, @soltysh) [SIG API Machinery, Apps, CLI and Testing]
In that version you should use apiVersion: batch/v1, see the example below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Notice that:
CronJob was promoted to general availability in Kubernetes v1.21. If you are using an older version of Kubernetes, please refer to the documentation for the version of Kubernetes that you are using, so that you see accurate information. Older Kubernetes versions do not support the batch/v1 CronJob API.
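Regardless of version, a quick way to see which CronJob API versions your own cluster serves before you pick an apiVersion (just a sanity check, not tied to any particular release):
# list the batch API group versions the API server actually serves
kubectl api-versions | grep '^batch/'
# show the group/version kubectl resolves for CronJob on this cluster
kubectl explain cronjob | head -n 5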

There is an open issue for that on the Kubernetes GitHub:
https://github.com/kubernetes/kubernetes/issues/51939
I believe there is no option other than to wait; right now I'm actually stuck on the same issue.

Related

Openshift - Run pod only for specific time period

I'm new to OpenShift. We are using OpenShift deployments to deploy our multiple microservices (Spring Boot applications). The deployment is done from a Docker image.
We have a situation where we need to stop one microservice alone from midnight until 5 AM (due to an external dependency).
Could someone suggest a way to do this automatically?
I was able to run
oc scale deployment/sampleservice --replicas=0
manually to bring the number of pods to zero, and to scale back up to 1 manually later.
I'm not sure how to run this command automatically at a specific time. A CronJob in OpenShift should be able to do this, but I'm not sure how to configure a CronJob to execute an oc command.
Any guidance will be of great help.
Using a cronjob is a good option.
First, you'll need an image that has the oc command line client available. I'm sure there's a prebuilt one out there somewhere, but since this will be running with privileges in your OpenShift cluster you want something you trust, which probably means building it yourself. I used:
FROM quay.io/centos/centos:8
RUN curl -o /tmp/openshift-client.tar.gz \
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz; \
tar -C /bin -xf /tmp/openshift-client.tar.gz oc kubectl; \
rm -f /tmp/openshift-client.tar.gz
ENTRYPOINT ["/bin/oc"]
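You'd then build and push that image somewhere your cluster can pull from; the registry and tag below are placeholders, not a published image:
docker build -t registry.example.com/myteam/openshift-client:latest .
docker push registry.example.com/myteam/openshift-client:latest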
In order to handle authentication correctly, you'll need to create a ServiceAccount and then assign it appropriate privileges through a Role and a RoleBinding. I created a ServiceAccount named oc-client-sa:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oc-client-sa
  namespace: oc-client-example
A Role named oc-client-role that grants privileges to Pod and Deployment objects:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-role
  namespace: oc-client-example
rules:
- verbs:
  - get
  - list
  - create
  - watch
  - patch
  apiGroups:
  - ''
  resources:
  - pods
- verbs:
  - get
  - list
  - create
  - watch
  - patch
  apiGroups:
  - 'apps'
  resources:
  - deployments
  - deployments/scale
And a RoleBinding that connects the oc-client-sa ServiceAccount to the oc-client-role Role:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-rolebinding
  namespace: oc-client-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oc-client-role
subjects:
- kind: ServiceAccount
  name: oc-client-sa
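If you save the three manifests above into one file (say rbac.yaml, a name used here only for illustration), they can be created in a single step, since each already carries its namespace:
oc apply -f rbac.yaml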
With all this in place, we can write a CronJob like this that will scale down a deployment at a specific time. Note that we're running the jobs using the oc-client-sa ServiceAccount we created earlier:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-down
  namespace: oc-client-example
spec:
  schedule: "00 00 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
          - image: docker.io/larsks/openshift-client
            args:
            - scale
            - deployment/sampleservice
            - --replicas=0
            name: oc-scale-down
You would write a similar one to scale things back up at 5 AM.
The oc client will automatically use the credentials provided to your pod by Kubernetes because of the serviceAccountName setting.
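For completeness, a sketch of the matching 5 AM scale-up CronJob, reusing the same image and ServiceAccount as above; only the name, schedule, and --replicas value differ:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-up
  namespace: oc-client-example
spec:
  schedule: "00 05 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
          - image: docker.io/larsks/openshift-client
            args:
            - scale
            - deployment/sampleservice
            - --replicas=1
            name: oc-scale-up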
API
You can use the OpenShift REST API client and write simple Python code that scales the replicas down. Pack that Python code into a Docker image and run it as a CronJob inside the OpenShift cluster.
Simple Curl
Run a simple curl inside the CronJob to scale the deployment up and down at a certain time.
Here is a simple curl to scale the deployment: https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html#Get-apis-apps-v1beta1-namespaces-namespace-deployments-name-scale
API documentation: https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html
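As a rough sketch of what that curl could look like from inside a pod, assuming the current apps/v1 API rather than the apps/v1beta1 version in the linked 3.7 docs, and using the mounted ServiceAccount token for authentication:
# read the in-cluster ServiceAccount token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# patch the deployment's scale subresource to 0 replicas
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"replicas":0}}' \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/<namespace>/deployments/sampleservice/scale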
CLI
If you don't want to run your own code as a Docker image in the CronJob, you can run the oc command instead; in that case, use a Docker image that ships the oc CLI inside the CronJob and fire the command from there.
oc CLI image: https://hub.docker.com/r/widerin/openshift-cli
Don't forget that authentication is required in both cases, whether you call the API or run a command inside the CronJob.

Kubernetes doesn't remove completed jobs for a Cronjob

Kubernetes doesn't delete a manually created, completed Job when a history limit is set and a newer Kubernetes client version is used.
mycron.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: myjob
spec:
  schedule: "* * 10 * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Creating cronjob:
kubectl create -f mycron.yaml
Creating job manually:
kubectl create job -n myjob --from=cronjob/hello hello-job
Result:
Job is completed but not removed
NAME        COMPLETIONS   DURATION   AGE
hello-job   1/1           2s         6m
Tested with Kubernetes server and client versions 1.19.3 and 1.20.0.
However, when I used an older client version (1.15.5) against the 1.19/1.20 servers, it worked well.
Comparing the differences while using the different client versions:
Kubernetes controller log:
Using client v1.15.5, I have this line in the log (but it is missing when using client v1.19/1.20):
1 event.go:291] "Event occurred" object="myjob/hello" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulDelete" message="Deleted job hello-job"
Job yaml:
Exactly the same, except the ownerReference part:
For client v1.19/1.20
ownerReferences:
- apiVersion: batch/v1beta1
  kind: CronJob
  name: hello
  uid: bb567067-3bd4-4e5f-9ca2-071010013727
For client v1.15
ownerReferences:
- apiVersion: apps/v1
  blockOwnerDeletion: true
  controller: true
  kind: CronJob
  name: hello
  uid: bb567067-3bd4-4e5f-9ca2-071010013727
And that is it. No other information in the logs, no errors, no warnings, nothing (I checked all the pod logs in kube-system).
Summary:
It seems to be a bug in the kubectl client itself rather than in the Kubernetes server, but I don't know how to proceed further.
edit:
When I let the cronjob itself create the job (i.e. when the time in the cron expression is hit), it removes the completed job successfully.
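(For reference, the ownerReferences compared above can be dumped straight from the created Job, using the names from this example:)
kubectl get job hello-job -n myjob -o jsonpath='{.metadata.ownerReferences}'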

Predictable pod name in kubernetes cron job

I have a cron job on kubernetes that I trigger like so for testing purposes:
kubectl create -f src/cronjob.yaml
kubectl create job --from=cronjob/analysis analysis-test
This creates a pod with the name analysis-test-<random-string>. I was wondering if it's possible to omit the suffix or make it predictable.
Filtered cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: analysis
  labels:
    job: analysis
spec:
  schedule: "0 0 * * 0"
  concurrencyPolicy: "Forbid"
  suspend: true
  failedJobsHistoryLimit: 3
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: container-name
            image: myimage
            env:
            - name: ENVIRONMENT
              value: "DEV"
            imagePullPolicy: IfNotPresent
            command: [ "/bin/bash", "-c", "--" ]
            args: [ "while true; do sleep 30; done;" ]
As of v1beta1, no, you can't. Here's the documentation regarding CronJobs:
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
Here's an excerpt from the docs:
When creating the manifest for a CronJob resource, make sure the name you provide is a valid DNS subdomain name. The name must be no longer than 52 characters. This is because the CronJob controller will automatically append 11 characters to the job name provided and there is a constraint that the maximum length of a Job name is no more than 63 characters.
Also, here's a reference page for the CronJob v1beta1 spec where you can view the available configuration options:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#cronjobspec-v1beta1-batch
Digging through the source code a little bit, you can see how the CronJob controller creates the Job resource:
https://github.com/kubernetes/kubernetes/blob/v1.20.1/pkg/controller/cronjob/cronjob_controller.go#L327
https://github.com/kubernetes/kubernetes/blob/v1.20.1/pkg/controller/cronjob/utils.go#L219
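If the underlying goal is just to find the pod without predicting its name, one workaround (a sketch, not part of the CronJob spec itself) is to select it by the job-name label that the Job controller adds to the pods it creates:
# the Job name is the one passed to `kubectl create job`
kubectl get pods -l job-name=analysis-test
kubectl logs -l job-name=analysis-test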

Check logs for a Kubernetes resource CronJob

I created a CronJob resource in Kubernetes.
I want to check the logs to validate that my crons are run, but I am not able to find any way to do that. I have gone through the commands, but it looks like they are all for the Pod resource type.
I also tried the following:
$ kubectl logs cronjob/<resource_name>
error: cannot get the logs from *v1beta1.CronJob: selector for *v1beta1.CronJob not implemented
Questions:
How do I check the logs of a CronJob resource type?
If I want this resource to be in a specific namespace, how do I implement that?
You need to check the logs of the pods which are created by the cronjob. The pods will be in the Completed state, but you can still check their logs.
# here you can get the pod_name from the stdout of the cmd `kubectl get pods`
$ kubectl logs -f -n default <pod_name>
For creating a cronjob in a specific namespace, just add the namespace in the metadata section. The pods will be created in that namespace.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Ideally you should be sending the logs to a log aggregator system such as ELK or Splunk.
In case you create a job from the cronjob, it works like this:
kubectl -n "namespace" logs jobs.batch/<resource_name> --tail 4

ScheduledJobs on Google Container Engine (kubernetes)

Does anyone have experience running scheduled jobs? According to the guide, ScheduledJobs have been available since 1.4 with the batch/v2alpha1 runtime config enabled.
So I made sure with the kubectl api-versions command:
autoscaling/v1
batch/v1
batch/v2alpha1
extensions/v1beta1
storage.k8s.io/v1beta1
v1
But when I tried the sample template below with the command kubectl apply -f job.yaml
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
spec:
  schedule: 0/1 * * * ?
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
I got this error:
error validating "job.yaml": error validating data: couldn't find type: v2alpha1.ScheduledJob; if you choose to ignore these errors, turn validation off with --validate=false
Is it possible that this feature is still not implemented? Or did I make some error during template creation?
Thank you in advance.
Okay, I think I resolved this issue. ScheduledJobs are currently in an alpha state, and Google Container Engine supports this feature only for clusters with the additional alpha APIs enabled. I was able to create such a cluster with this command:
gcloud alpha container clusters create my-cluster --enable-kubernetes-alpha
As a result, I now have a 30-day limited cluster with full feature support. I can see scheduled jobs with kubectl get scheduledjobs as well as create new ones from templates.
You can find more info about alpha clusters here.
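As a quick sanity check on such a cluster, you can confirm the alpha API is actually served before applying the template:
kubectl api-versions | grep batch/v2alpha1
kubectl get scheduledjobs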