Kubernetes: Run Job using CronJob

Is there a way I can run an existing Job using a CronJob resource?
In the CronJob spec template, can we apply a selector using labels? Something like this:
Job Spec: (Link to job docs)
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
  labels:
    name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
Cron Spec:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      labelSelector:
        name: pi # refer to the job created above
I came across Create-Job-From-Cronjob. I want to try the inverse of this.

No, you cannot do this the way you want. kubectl only allows you to create a Job based on a CronJob, not vice versa. If you want the same workload to run on a schedule, the usual approach is to copy the Job's pod template into the CronJob's jobTemplate, as sketched after the command list below.
kubectl create job NAME [--image=image --from=cronjob/name] -- [COMMAND] [args...] [flags] [options]
Available commands right now for kubectl create:
clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole
configmap Create a configmap from a local file, directory or literal value
deployment Create a deployment with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole
secret Create a secret using specified subcommand
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name
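A CronJob's jobTemplate has no labelSelector field that can point at an existing Job; it has to carry the full pod template itself. A minimal sketch, reusing the pi Job from the question:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 4
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never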


Define a standalone patch as YAML

I have a need to define a standalone patch as YAML.
More specifically, I want to do the following:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-my-registry"}]}'
The catch is I can't use kubectl patch. I'm using a GitOps workflow with flux, and that resource I want to patch is a default resource created outside of flux.
In other terms, I need to do the same thing as the command above but with kubectl apply only:
kubectl apply -f patch.yaml
I wasn't able to figure out if you can define such a patch.
The key bit is that I can't predict the name of the default secret token on a new cluster (as the name is random, i.e. default-token-uudge)
Fields set and deleted from Resource Config are merged into Resources by Kubectl apply:
If a Resource already exists, Apply updates the Resources by merging the
local Resource Config into the remote Resources
Fields removed from the Resource Config will be deleted from the remote Resource
You can learn more about Kubernetes Field Merge Semantics.
If your limitation is not knowing the secret default-token-xxxxx name, no problem, just keep that field out of your yaml.
As long as the yaml has enough fields to identify the target resource (name, kind, namespace) it will add/edit the fields you set.
I created a cluster (minikube in this example, but it could be any) and retrieved the current default serviceAccount:
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "330"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
Then we create a YAML file with the contents we want to add:
$ cat add-image-pull-secrets.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: registry-my-registry
Now we apply and verify:
$ kubectl apply -f add-image-pull-secrets.yaml
serviceaccount/default configured
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
- name: registry-my-registry
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","imagePullSecrets":[{"name":"registry-my-registry"}],"kind":"ServiceAccount","metadata":{"annotations":{},"name":"default","namespace":"default"}}
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "2382"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
As you can see, the imagePullSecrets entry was added to the resource.
I hope it fits your needs. If you have any further questions let me know in the comments.
Let's say your service account YAML looks like below:
$ kubectl get sa demo -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
Now, you want to add or change the imagePullSecrets for that service account. To do so, edit the YAML file and add imagePullSecrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
And finally, apply the changes:
$ kubectl apply -f service-account.yaml

How to run kubectl within a job in a namespace?

Hi, I saw this documentation where kubectl can be run inside a pod in the default namespace.
Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.
When I tried adding serviceAccounts to the container I got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This was when I tried sshing into the container and running kubectl.
Edit:
As I mentioned earlier, based on the documentation I had added the service accounts. Below is the YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. It means the permission aspect is the same as in a normal pod, meaning that yes, it is possible to run kubectl inside a job resource.
TL;DR:
Your yaml file is correct; maybe there was something else in your cluster. I recommend deleting and recreating these resources and trying again.
Also check the version of your Kubernetes installation and the job image's kubectl version; if they are more than one minor version apart, you may hit unexpected incompatibilities.
Security Considerations:
Your job role's scope follows best practice according to the documentation (a specific role, for a specific user, on a specific namespace).
If you use a ClusterRoleBinding with the cluster-admin role it will work, but it's over-permissioned and not recommended, since it gives full admin control over the entire cluster.
Test Environment:
I deployed your config on Kubernetes 1.17.3 and ran the job with both bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
To avoid incompatibility, use a kubectl image whose version matches your server.
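You can compare the client and server versions with, for example:
$ kubectl version --short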
Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
I created two pods just to add output to the log of get pods.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
Then I applied the Job, ServiceAccount, Role and RoleBinding:
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 88s
testing-stuff-ddpvf 0/1 Completed 0 13s
ubuntu 0/1 Completed 3 63s
Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 76s
testing-stuff-ddpvf 1/1 Running 0 1s
ubuntu 1/1 Running 3 51s
As you can see, it has succeeded running the job with the custom ServiceAccount.
Let me know if you have further questions about this case.
Create service account like this.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
Create ClusterRoleBinding using this.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
Now create the pod with the same config as given in the documentation.
When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it will use the default service account. This service account doesn't have permission to perform those operations by default. So you need to create a service account, role and rolebinding using a more privileged account. You should have a kubeconfig file with admin (or admin-like) privileges; use that kubeconfig with kubectl from outside the pod to create the service account, role, rolebinding, etc. (an imperative sketch follows the pod spec below).
After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within this pod using kubectl.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl
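As a sketch of that bootstrap step, the namespaced ServiceAccount, Role and RoleBinding used earlier in this thread could also be created imperatively with an admin kubeconfig (names match the earlier examples):
$ kubectl create serviceaccount internal-kubectl -n my-namespace
$ kubectl create role modify-pods -n my-namespace --verb=get,list,delete --resource=pods
$ kubectl create rolebinding modify-pods-to-sa -n my-namespace --role=modify-pods --serviceaccount=my-namespace:internal-kubectl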

Specify CronJob Schedule using config map

I have a CronJob defined in a YAML file, deployed to Istio as
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule: "*/12 * * * *"
I want to have different schedules in different environments, so tried to set the schedule from a config map:
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule:
  - valueFrom:
      configMapKeyRef:
        name: config-name
        key: service-schedule
It fails to sync with the error
invalid type for io.k8s.api.batch.v1beta1.CronJobSpec.schedule: got "array", expected "string"
Is it possible to use config map in this way?
A ConfigMap is used to set environment variables inside a container or is mounted as a volume.
I don't think you can use a ConfigMap to set the schedule of a CronJob; a sketch of how a ConfigMap is normally consumed follows below.
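For illustration, a minimal sketch of the usual pattern, passing the same ConfigMap key to the container as an environment variable (the CronJob name, container name and image are placeholders, not from the question):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob          # placeholder name
spec:
  schedule: "*/12 * * * *"  # must remain a literal string
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-container   # placeholder container
            image: my-image      # placeholder image
            env:
            - name: SERVICE_SCHEDULE
              valueFrom:
                configMapKeyRef:
                  name: config-name
                  key: service-schedule
          restartPolicy: OnFailure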

Kubernetes CronJob Not Correctly Using Docker Secret

I have a Kubernetes cluster that I am trying to set up a CronJob on. I have the CronJob set up in the default namespace, with the image pull secret I want to use set up in the default namespace as well. I set imagePullSecrets in the CronJob to reference the secret I use to pull images from my private Docker registry. I can verify this secret is valid because I have deployments in the same cluster and namespace that use it to successfully pull Docker images. However, when the CronJob pod starts, I see the following error:
no basic auth credentials
I understand this happens when the pod doesn't have credentials to pull the image from the docker registry. But I am using the same secret for my deployments in the same namespace and they successfully pull down the image. Is there a difference in the configuration of using imagePullSecrets between deployments and cronjobs?
Server Version: v1.9.3
CronJob config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: default
  name: my-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: 30 13 * * *
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          imagePullSecrets:
          - name: my-secret
          containers:
          - image: my-image
            name: my-cronjob
            command:
            - my-command
            args:
            - my-args
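One way to narrow this down (a hedged sketch; the pod name is a placeholder for whatever pod the CronJob spawned) is to check whether the secret actually propagated to the spawned pod:
$ kubectl get pods -n default
$ kubectl get pod <cronjob-pod-name> -n default -o jsonpath='{.spec.imagePullSecrets}'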

How to schedule pods restart

Is it possible to restart pods automatically based on the time?
For example, I would like to restart the pods of my cluster every morning at 8.00 AM.
Use a CronJob, not to run your pods, but to schedule a Kubernetes API command that restarts the deployment every day (kubectl rollout restart). That way, if something goes wrong, the old pods will not be down or removed.
Rollouts create new ReplicaSets and wait for them to be up before killing off the old pods and rerouting the traffic, so the service continues uninterrupted.
You have to set up RBAC so that the Kubernetes client running from inside the cluster has permission to make the needed calls to the Kubernetes API.
---
# Service account the client will use to reset the deployment,
# by default the pods running inside the cluster can do no such things.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
---
# allow getting status and patching only the one deployment you want
# to restart
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  resourceNames: ["<YOUR DEPLOYMENT NAME>"]
  verbs: ["get", "patch", "list", "watch"] # "list" and "watch" are only needed
                                           # if you want to use `rollout status`
---
# bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
subjects:
- kind: ServiceAccount
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
And the cronjob specification itself:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *' # cron spec of time, here, 8 o'clock
  jobTemplate:
    spec:
      backoffLimit: 2 # this has very low chance of failing, as all this does
                      # is prompt kubernetes to schedule new replica set for
                      # the deployment
      activeDeadlineSeconds: 600 # timeout, makes most sense with
                                 # "waiting for rollout" variant specified below
      template:
        spec:
          serviceAccountName: deployment-restart # name of the service
                                                 # account configured above
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl # probably any kubectl image will do,
                                   # optionally specify version, but this
                                   # should not be necessary, as long as the
                                   # version of kubectl is new enough to
                                   # have `rollout restart`
            command:
            - 'kubectl'
            - 'rollout'
            - 'restart'
            - 'deployment/<YOUR DEPLOYMENT NAME>'
Optionally, if you want the cronjob to wait for the deployment to roll out, change the cronjob command to:
command:
- bash
- -c
- >-
  kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME> &&
  kubectl rollout status deployment/<YOUR DEPLOYMENT NAME>
Another quick and dirty option, for a pod that has a restart policy of Always (which cron jobs are not supposed to handle; see creating a cron job spec pod template), is a livenessProbe that simply tests the time and restarts the pod on a specified schedule.
For example: after startup, wait an hour, then check the hour every minute; if the hour is 3 (AM), fail the probe and restart, otherwise pass.
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - exit $(test $(date +%H) -eq 3 && echo 1 || echo 0)
  failureThreshold: 1
  initialDelaySeconds: 3600
  periodSeconds: 60
Time granularity is up to how you return the date and test ;)
Of course this does not work if you are already utilizing the liveness probe as an actual liveness probe ¯\_(ツ)_/¯
I borrowed the idea from @Ryan Lowe but modified it a bit. It will restart any pod older than 24 hours:
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - "end=$(date -u +%s); start=$(stat -c %Z /proc/1 | awk '{print int($1)}'); test $(($end-$start)) -lt 86400"
There's a specific resource for that: CronJob
Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: your-cron
spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: your-periodic-batch-job
        spec:
          containers:
          - name: my-image
            image: your-image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Change spec.concurrencyPolicy to Replace if you want to replace the old pod when starting a new one. With Forbid, new pod creation is skipped if the old pod is still running.
According to cronjob-in-kubernetes-to-restart-delete-the-pod-in-a-deployment,
you could create a kind: CronJob with a jobTemplate that has containers. Your CronJob would then start those containers with an activeDeadlineSeconds of one day (until restart). Per your example, the schedule would be 0 8 * * * for 8:00 AM. A sketch of that idea follows.
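A minimal sketch of that idea (the CronJob name, container name and image are placeholders, not from the linked answer):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-restart        # placeholder name
spec:
  schedule: "0 8 * * *"      # every day at 8:00 AM
  concurrencyPolicy: Replace # replace yesterday's pod when the new one starts
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400  # terminate the job's pod after one day
      template:
        spec:
          containers:
          - name: app              # placeholder container
            image: your-image      # placeholder image
          restartPolicy: Never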
We were able to do this from a CRON job that modifies the deployment manifest (passing a random parameter every 3 hours). We specifically used Spinnaker for triggering deployments: we created a CRON job in Spinnaker whose Patch Manifest stage patches the deployment, and Kubernetes restarts the pods whenever the YAML changes.
Since all pods could otherwise restart at the same time and cause downtime, we have a Rolling Restart policy where maxUnavailable is 0%:
spec:
  # replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 0%
This spawns new pods and then terminates old ones.