I have a YAML manifest and I want to parameterize the schedule of that Kubernetes CronJob. In my environment file I declared JobFrequencyInMinutes: "10".
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-mongo-cronjob
spec:
  schedule: "*/$(JobFrequencyInMinutes) * * * *"
  concurrencyPolicy: "Forbid"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: scheduled-mongo-cronjob
            image: xxxx
            env:
            - name: JobFrequencyInMinutes
              valueFrom:
                configMapKeyRef:
                  key: JobFrequencyInMinutes
                  name: env-conf
When I apply the above YAML I get an error:
The CronJob "scheduled-mongo-cronjob" is invalid: spec.schedule: Invalid value: "*/$(JobFrequencyInMinutes) * * * *": Failed to parse int from $(JobFrequencyInMinutes): strconv.Atoi: parsing "$(JobFrequencyInMinutes)": invalid syntax
Please guide me if there is any alternative way to achieve this.
The issue here is that the environment variable will only be available once the CronJob is created, and only inside the Job itself; creation fails because the variable $(JobFrequencyInMinutes) does not exist at the node level.
I would say that to achieve what you are trying to do, you would need an environment variable at the cluster level. Whenever you want to update your schedule, you would set a new value for it and then re-create your CronJob.
It seems, though, that the declarative way (via your YAML) is not working, so you would need to create it the imperative way:
kubectl run scheduled-mongo-cronjob --schedule="*/$JobFrequencyInMinutes * * * *" --restart=OnFailure --image=xxxx
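Alternatively, if you want to stay declarative, one option (a sketch, not part of the original answer) is to render the manifest on the client side before applying it, for example with envsubst from gettext; this assumes envsubst is installed and that the placeholder in the schedule is changed to shell-style ${JobFrequencyInMinutes}:
export JobFrequencyInMinutes=10
# cronjob.yaml would then contain: schedule: "*/${JobFrequencyInMinutes} * * * *"
envsubst < cronjob.yaml | kubectl apply -f -
The API server then only ever sees a literal cron expression, which avoids the parse error above.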
Related
When deploying NebulaGraph from binary packages (RPM/DEB), I could leverage logrotate from the OS, which is a basic expectation/solution for cleaning up the generated logs.
In a K8s deployment, however, there is no such layer at the OS level anymore. What is the state-of-the-art thing I should do, or is this a missing piece in Nebula-Operator?
I think we could attach the log dir to a pod running logrotate, too, but that looks inelegant to me (or am I wrong?).
After some study, I think the best way could be to leverage what the K8s CronJob API provides.
We could create one like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: log-cleanup
spec:
  schedule: "0 0 * * *" # run the job every day at midnight
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: log-cleanup
            image: your-log-cleanup-image:latest
            command: ["/bin/sh", "-c", "./cleanup.sh /path/to/log"]
          restartPolicy: OnFailure
And in /cleanup.sh we could put either simple log-removal logic or log-archiving logic (say, move the files to S3).
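For illustration, a minimal cleanup.sh for the plain log-removal case could look like this (the path, file pattern, and 7-day retention are assumptions, not NebulaGraph defaults):
#!/bin/sh
# Remove log files older than 7 days under the directory passed as $1
LOG_DIR="${1:-/path/to/log}"
find "$LOG_DIR" -type f -name "*.log*" -mtime +7 -delete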
I'm new to OpenShift. We are using OpenShift deployments to deploy our multiple microservices (Spring Boot applications). The deployment is done from a Docker image.
We have a situation where we need to stop one microservice alone from midnight until 5 AM (due to an external dependency).
Could someone suggest a way to do this automatically?
I was able to run
oc scale deployment/sampleservice --replicas=0 manually to bring the number of pods to zero, and scale back up to 1 manually later.
I'm not sure how to run this command automatically at a specific time. A CronJob in OpenShift should be able to do this, but I'm not sure how to configure a CronJob to execute an oc command.
Any guidance will be of great help.
Using a cronjob is a good option.
First, you'll need an image that has the oc command line client available. I'm sure there's a prebuilt one out there somewhere, but since this will be running with privileges in your OpenShift cluster you want something you trust, which probably means building it yourself. I used:
FROM quay.io/centos/centos:8
RUN curl -o /tmp/openshift-client.tar.gz \
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz; \
tar -C /bin -xf /tmp/openshift-client.tar.gz oc kubectl; \
rm -f /tmp/openshift-client.tar.gz
ENTRYPOINT ["/bin/oc"]
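Building and pushing the image could look roughly like this (registry and image name are placeholders, not necessarily what was used here):
docker build -t <your-registry>/openshift-client:latest .
docker push <your-registry>/openshift-client:latest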
In order to handle authentication correctly, you'll need to create a ServiceAccount and then assign it appropriate privileges through a Role and a RoleBinding. I created a ServiceAccount named oc-client-sa:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oc-client-sa
  namespace: oc-client-example
A Role named oc-client-role that grants privileges to Pod and Deployment objects:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-role
  namespace: oc-client-example
rules:
- verbs:
  - get
  - list
  - create
  - watch
  - patch
  apiGroups:
  - ''
  resources:
  - pods
- verbs:
  - get
  - list
  - create
  - watch
  - patch
  apiGroups:
  - 'apps'
  resources:
  - deployments
  - deployments/scale
And a RoleBinding that connects the oc-client-sa ServiceAccount to the oc-client-role Role:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-rolebinding
  namespace: oc-client-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oc-client-role
subjects:
- kind: ServiceAccount
  name: oc-client-sa
With all this in place, we can write a CronJob like this that will scale down a deployment at a specific time. Note that we're running the jobs using the oc-client-sa ServiceAccount we created earlier:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-down
  namespace: oc-client-example
spec:
  schedule: "00 00 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
          - image: docker.io/larsks/openshift-client
            args:
            - scale
            - deployment/sampleservice
            - --replicas=0
            name: oc-scale-down
You would write a similar one to scale things back up at 5 AM.
The oc client will automatically use the credentials provided to your pod by Kubernetes because of the serviceAccountName setting.
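For example, a sketch of the matching 5 AM scale-up CronJob could look like this (the name scale-web-up is just an example; only the schedule and replica count really change):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-up
  namespace: oc-client-example
spec:
  schedule: "00 05 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
          - image: docker.io/larsks/openshift-client
            args:
            - scale
            - deployment/sampleservice
            - --replicas=1
            name: oc-scale-up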
API
You can use the OC REST API client and write simple Python code that scales down the replicas. Pack this Python code into a Docker image and run it as a CronJob inside the OC cluster.
Simple Curl
Run a simple curl inside the CronJob to scale the deployment up and down at a certain time.
Here is the API endpoint to call with curl to scale the deployment: https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html#Get-apis-apps-v1beta1-namespaces-namespace-deployments-name-scale
API documentation : https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html
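As a rough sketch (not copied from the linked docs), a curl call from inside the cluster could patch the scale subresource like this, assuming the pod's ServiceAccount token has the required permissions and <namespace> is filled in; on clusters as old as the linked 3.7 docs the group/version would be apps/v1beta1 rather than apps/v1:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"replicas":0}}' \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/<namespace>/deployments/sampleservice/scale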
CLI
If you don't want to run custom code as a Docker image in the K8s CronJob, you can also just run the command; in that case, use an oc CLI Docker image inside the CronJob and fire the command from there.
OC-cli : https://hub.docker.com/r/widerin/openshift-cli
Don't forget that authentication is required in both cases, whether calling the API or running a command inside the CronJob.
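For example, with the oc CLI image the job could authenticate using the ServiceAccount token that Kubernetes mounts into the pod (a sketch, assuming RBAC like in the previous answer is in place):
oc login --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  --server=https://kubernetes.default.svc \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
oc scale deployment/sampleservice --replicas=0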
I have a cron job on kubernetes that I trigger like so for testing purposes:
kubectl create -f src/cronjob.yaml
kubectl create job --from=cronjob/analysis analysis-test
This creates a pod with the name analysis-test-<random-string>. I was wondering if it's possible to omit the suffix or make it predictable?
Filtered cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: analysis
  labels:
    job: analysis
spec:
  schedule: "0 0 * * 0"
  concurrencyPolicy: "Forbid"
  suspend: true
  failedJobsHistoryLimit: 3
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: container-name
            image: myimage
            env:
            - name: ENVIRONMENT
              value: "DEV"
            imagePullPolicy: IfNotPresent
            command: [ "/bin/bash", "-c", "--" ]
            args: [ "while true; do sleep 30; done;" ]
As of v1beta1, no, you can't. Here's the documentation regarding CronJobs:
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
Here's an excerpt from the docs:
When creating the manifest for a CronJob resource, make sure the name you provide is a valid DNS subdomain name. The name must be no longer than 52 characters. This is because the CronJob controller will automatically append 11 characters to the job name provided and there is a constraint that the maximum length of a Job name is no more than 63 characters.
Also, here's a reference page for the CronJob v1beta1 spec where you can view the available configuration options:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#cronjobspec-v1beta1-batch
Digging through the source code a little bit, you can see how the CronJob controller creates the Job resource:
https://github.com/kubernetes/kubernetes/blob/v1.20.1/pkg/controller/cronjob/cronjob_controller.go#L327
https://github.com/kubernetes/kubernetes/blob/v1.20.1/pkg/controller/cronjob/utils.go#L219
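If the underlying goal is just to find the pod that the test job creates, one workaround (an assumption about the use case, not something from the linked sources) is to filter by the job-name label that the Job controller puts on its pods, so the random suffix never has to be known:
kubectl get pods -l job-name=analysis-test
# or grab only the generated pod name:
kubectl get pods -l job-name=analysis-test -o jsonpath='{.items[0].metadata.name}'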
I want to run one CronJob at different times.
Is it possible to do something like this in my YAML file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule:
  - "*/10 00-08 * * *"
  - "*/5 09-18 * * *"
  - "*/10 19-23 * * *"
  concurrencyPolicy: Forbid
  ...
Or do I have to create separate YAML files for every schedule?
The short answer is: no, you cannot create one CronJob YAML with several crontab schedules.
The easy solution would be to use a separate CronJob resource for each crontab line from your example. You can use the same image for each of your CronJobs.
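As a sketch, the daytime line from the question would become its own resource roughly like this, and the same pattern (identical jobTemplate, different name and schedule) would be repeated for the other two lines; the name and image below are placeholders:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob-daytime   # e.g. also my-cronjob-morning / my-cronjob-evening
spec:
  schedule: "*/5 09-18 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: my-cronjob
            image: myimage   # reuse the same image in every CronJob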
I've added --runtime-config=batch/v2alpha1=true to the kube-apiserver config like so:
... other stuff
command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:2379"
- "--etcd-quorum-read=true"
- "--advertise-address=10.240.255.15"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--storage-backend=etcd2"
- "--v=4"
- "—-runtime-config=batch/v2alpha1=true"
... etc
but after restarting the master kubectl api-versions still shows only batch/v1, no v2alpha1 to be seen.
$ kubectl api-versions
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1beta1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
certificates.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Here's my job definition:
kind: CronJob
apiVersion: batch/v2alpha1
metadata:
  name: mongo-backup
spec:
  schedule: "* */1 * * *"
  jobTemplate:
    spec:
      ... etc
And the error I get when I try to create the job:
$ kubectl create -f backup-job.yaml
error: error validating "backup-job.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"batch", Version:"v2alpha1", Kind:"CronJob"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl create -f backup-job.yaml --validate=false
error: unable to recognize "backup-job.yaml": no matches for batch/, Kind=CronJob
What else do I need to do?
PS. this is on Azure ACS, I don't think it makes a difference though.
You may use the newer API version here, apiVersion: batch/v1beta1; that should fix the issue.
The Kubernetes v1.21 release notes state that:
The batch/v2alpha1 CronJob type definitions and clients are deprecated and removed. (#96987, @soltysh) [SIG API Machinery, Apps, CLI and Testing]
In that version you should use apiVersion: batch/v1, see the example below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Notice that:
CronJobs was promoted to general availability in Kubernetes v1.21. If you are using an older version of Kubernetes, please refer to the documentation for the version of Kubernetes that you are using, so that you see accurate information. Older Kubernetes versions do not support the batch/v1 CronJob API.
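A quick way to confirm which batch API versions your own cluster actually serves before choosing apiVersion is the same command used in the question above:
kubectl api-versions | grep batch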
There is an open issue for that on their github:
https://github.com/kubernetes/kubernetes/issues/51939
I believe there is no other option but to wait right now.
I'm actually stuck on the same issue.