Deploying container as a Job to (Google) Kubernetes Engine - How to terminate Pod after completing task - kubernetes

The goal is to terminate the pod after the Job completes.
This is my YAML file. Currently, my pod status is Completed after running the job.
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: container-name
        image: my-img
        command: ["python", "main.py"]
      # Do not restart containers after they exit
      restartPolicy: Never
  # of retries before marking as failed.
  backoffLimit: 4

You can have the Jobs removed automatically once they complete.
If the Job is created by a CronJob, you can configure in the YAML a limit on how many finished Jobs (and their Pods) are kept:
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
You can set the history limits using the above config in the YAML.
The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit
fields are optional. These fields specify how many completed and
failed jobs should be kept. By default, they are set to 3 and 1
respectively. Setting a limit to 0 corresponds to keeping none of the
corresponding kind of jobs after they finish.
backoffLimit: 4 will retry the Job up to four times before marking it as failed.
Read more at: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits
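For illustration, a minimal CronJob sketch showing where these history-limit fields go (the CronJob name and schedule are placeholders; the container spec reuses the question's values):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/5 * * * *"
  # Keep no finished Jobs (and therefore no completed Pods) around
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      backoffLimit: 4
      template:
        spec:
          containers:
          - name: container-name
            image: my-img
            command: ["python", "main.py"]
          restartPolicy: Never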

A Job's pod basically terminates itself after the main container of that pod finishes successfully. If it returns a failure exit code, the Job will retry it as many times as you specified in your backoffLimit.
So it seems as if your container does not terminate after it finishes whatever work it is supposed to do. Without knowing anything about your job image I cannot tell you exactly what you need to do.
However, it seems as if you need to adapt your main.py to exit properly after it has done what it is supposed to do.

If you want to delete the pod after completing the task, then just delete the Job with kubectl:
$ kubectl delete job <job-name>
You can also delete the Job using its YAML file with the following command:
$ kubectl delete -f ./job.yaml
When you delete the Job using kubectl, all the Pods it created get deleted too.
You can check whether these Jobs and Pods were deleted or not with the following commands:
$ kubectl get jobs
$ kubectl get pods
For more details, refer to the Jobs documentation.
I have tried the above steps in my own environment and it worked for me.

Related

Kubernetes: How to update a live busybox container's 'command'

I have the following manifest that created the running pod named 'test'
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: blue
spec:
  containers:
  - name: funskies
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World'"]
I want to update the pod to include the additional command
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: blue
spec:
  restartPolicy: Never
  containers:
  - name: funskies
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World' > /home/my_user/logging.txt"]
What I tried
kubectl edit pod test
What resulted
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# pods "test" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`...
Other things I tried:
Updated the manifest and then ran apply - same issue
kubectl apply -f test.yaml
Question: What is the proper way to update a running pod?
You can't modify most properties of a Pod. Typically you don't want to directly create Pods; use a higher-level controller like a Deployment.
The Kubernetes documentation for a PodSpec notes (emphasis mine):
containers: List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.
In all cases, no matter what, a container runs a single command, and if you want to change what that command is, you need to delete and recreate the container. In Kubernetes this always means deleting and recreating the containing Pod. Usually you shouldn't use bare Pods, but if you do, you can create a new Pod with the new command and delete the old one. Deleting Pods is extremely routine and all kinds of ordinary things cause it to happen (updating Deployments, a HorizontalPodAutoscaler scaling down, ...).
If you have a Deployment instead of a bare Pod, you can freely change the template: for the Pods it creates. This includes changing their command:. This will result in the Deployment creating a new Pod with the new command, and once it's running, deleting the old Pod.
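As a sketch only (not part of the original question), a minimal Deployment wrapping the same container could look like the following; the trailing sleep and the /tmp path are assumptions, added because a Deployment expects a long-running process and a stock busybox image may not have /home/my_user:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: funskies
        image: busybox
        # Editing this command and re-applying the manifest makes the
        # Deployment create a new Pod and then delete the old one.
        command: ["/bin/sh", "-c", "echo 'Hello World' > /tmp/logging.txt; sleep 3600"]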
The sorts of very-short-lived single-command containers you show in the question aren't necessarily well-suited to running in Kubernetes. If the Pod isn't going to stay running and serve requests, a Job could be a better match; but a Job believes it will only be run once, and if you change the pod spec for a completed Job I don't think it will launch a new Pod. You'd need to create a new Job for this case.
I am not sure what the whole requirement is, but you can exec into the pod and update the details:
$ kubectl exec <pod-name> -it -n <namespace> -- <command to execute>
For example:
$ kubectl exec pod/hello-world-xxxx-xx -it -- /bin/bash
If the tty supports a shell, use "/bin/sh" to update the content or command.
Editing the running pod will not retain the changes in the manifest file, so in that case you have to run a new pod with the changes.

How to run kubernetes cronjob immediately

I'm very new to Kubernetes. Here I tried a CronJob YAML in which the pods are created every 1 minute.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
But the pods are created only after 1 minute. Is it possible to run the job immediately and after that every 1 minute?
As already stated in the comments, a CronJob is backed by a Job. What you can do is literally launch CronJob and Job resources using the same spec at the same time. You can do that conveniently using a Helm chart or Kustomize.
Alternatively, you can place both manifests in the same file, or in two files in the same directory, and then use:
kubectl apply -f <file/dir>
With this workaround the initial Job is started immediately and then, after some time, the CronJob takes over (see the sketch below).
The downside of this solution is that the first Job is standalone and is not included in the CronJob's history. Another possible side effect is that the first Job and the first CronJob-created Job can run in parallel if the Job cannot finish its tasks fast enough; concurrencyPolicy does not take that standalone Job into consideration.
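As a sketch of that workaround (the one-off Job's name is just an example; the pod spec is copied from the question), both resources can live in a single file separated by --- and be applied together:
# One-off Job that runs as soon as the file is applied
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
      restartPolicy: OnFailure
---
# Recurring CronJob using the same pod spec
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure
Applying this file with kubectl apply -f as above creates both objects in one go.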
From the documentation:
A cron job creates a job object about once per execution time of its
schedule. We say "about" because there are certain circumstances where
two jobs might be created, or no job might be created. We attempt to
make these rare, but do not completely prevent them.
So if you want to keep the task execution more strict, it may be better to use a Bash wrapper script with a sleep between task executions, or to design an app that forks subprocesses at the specified interval, build a container image, and run it as a Deployment.
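As a rough sketch of that last approach, assuming a busybox shell loop stands in for a purpose-built app (the 60-second sleep mirrors the one-minute schedule):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-loop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-loop
  template:
    metadata:
      labels:
        app: hello-loop
    spec:
      containers:
      - name: hello
        image: busybox
        # Runs the task immediately, then repeats it every 60 seconds
        command: ["/bin/sh", "-c", "while true; do date; echo Hello from the Kubernetes cluster; sleep 60; done"]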

Changing image of kubernetes job

I'm working on the manifest of a kubernetes job.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: hello-image:latest
I then apply the manifest using kubectl apply -f <deployment.yaml> and the job runs without any issue.
The problem comes when I change the image of the running container from latest to something else.
At that point I get a "field is immutable" error on applying the manifest.
I get the same error whether the job is running or completed. The only workaround I have found so far is to manually delete the job before applying the new manifest.
How can I update the current job without having to manually delete it first?
I guess you are probably using an incorrect Kubernetes resource. A Job runs its Pod to completion and is essentially immutable; you cannot update it. As per the Kubernetes documentation:
Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=false.
If you intend to update an image, you should use either a Deployment or a ReplicationController, which support updates.
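As an illustrative sketch (assuming the workload can run as a long-lived service rather than a run-to-completion task; the Deployment name and labels are placeholders), a Deployment whose image can be changed and re-applied freely might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        # Changing this tag and re-applying the manifest triggers a rolling
        # update instead of a "field is immutable" error.
        image: hello-image:v2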

Is there a way to delete pods automatically through YAML after they have status 'Completed'?

I have a YAML file which creates a pod on execution. This pod extracts data from one of our internal systems and uploads it to GCP. It takes around 12 minutes to do so, after which the status of the pod changes to 'Completed'; however, I would like to delete this pod once it has completed.
apiVersion: v1
kind: Pod
metadata:
  name: xyz
spec:
  restartPolicy: Never
  volumes:
  - name: mount-dir
    hostPath:
      path: /data_in/datos/abc/
  initContainers:
  - name: abc-ext2k8s
    image: registrysecaas.azurecr.io/secaas/oracle-client11c:11.2.0.4-latest
    volumeMounts:
    - mountPath: /media
      name: mount-dir
    command: ["/bin/sh","-c"]
    args: ["sqlplus -s CLOUDERA/MYY4nGJKsf#hal5:1531/dbmk #/media/ext_hal5_lk_org_localfisico.sql"]
  imagePullSecrets:
  - name: regcred
Is there a way to achieve this?
Typically you don't want to create bare Kubernetes pods. The pattern you're describing of running some moderate-length task in a pod, and then having it exit, matches a Job. (Among other properties, a job will reschedule a pod if the node it's on fails.)
Just switching this to a Job doesn't directly address your question, though. The documentation notes:
When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status.
So whatever task creates the pod (or job) needs to monitor it for completion, and then delete the pod (or job). (Consider using the watch API or equivalently the kubectl get -w option to see when the created objects change state.) There's no way to directly specify this in the YAML file since there is a specific intent that you can get useful information from a completed pod.
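For reference, a rough sketch of wrapping that pod spec in a Job (the image, volume, and command are copied from the question; the sqlplus step is shown here as a regular container, since a Job's pod needs at least one regular container):
apiVersion: batch/v1
kind: Job
metadata:
  name: xyz
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
      - name: mount-dir
        hostPath:
          path: /data_in/datos/abc/
      containers:
      - name: abc-ext2k8s
        image: registrysecaas.azurecr.io/secaas/oracle-client11c:11.2.0.4-latest
        volumeMounts:
        - mountPath: /media
          name: mount-dir
        command: ["/bin/sh","-c"]
        args: ["sqlplus -s CLOUDERA/MYY4nGJKsf#hal5:1531/dbmk #/media/ext_hal5_lk_org_localfisico.sql"]
      imagePullSecrets:
      - name: regcred
As the quoted documentation notes, this on its own still does not delete the finished Job or its pod.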
If this is actually a nightly task that you want to run at midnight or some such, you do have one more option. A CronJob will run a job on some schedule, which in turn runs a single pod. The important relevant detail here is that CronJobs have an explicit control for how many completed Jobs they keep. So if a CronJob matches your pattern, you can set successfulJobsHistoryLimit: 0 in the CronJob spec, and created jobs and their matching pods will be deleted immediately.

Not able to see Pod when I create a Job

When I try to create a Deployment as type Job, it's not pulling any image.
Below is the .yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: copyartifacts
spec:
  backoffLimit: 1
  template:
    metadata:
      name: copyartifacts
    spec:
      restartPolicy: "Never"
      volumes:
      - name: sharedvolume
        persistentVolumeClaim:
          claimName: shared-pvc
      - name: dockersocket
        hostPath:
          path: /var/run/docker.sock
      containers:
      - name: copyartifacts
        image: alpine:3.7
        imagePullPolicy: Always
        command: ["sh", "-c", "ls -l /shared; rm -rf /shared/*; ls -l /shared; while [ ! -d /shared/artifacts ]; do echo Waiting for artifacts to be copied; sleep 2; done; sleep 10; ls -l /shared/artifacts; "]
        volumeMounts:
        - mountPath: /shared
          name: sharedvolume
Can you please guide here?
Regards,
Vikas
There could be two possible reasons for not seeing the pod.
1. The pod hasn't been created yet.
2. The pod has completed its task and terminated before you noticed.
1. Pod hasn't been created:
If the pod hasn't been created yet, you have to find out why the Job failed to create it. You can view the Job's events to see if there are any failure events. Use the following command to describe a Job:
kubectl describe job <job-name> -n <namespace>
Then check the Events: field. There might be some events showing pod creation failure with the respective reason.
2. Pod has completed and terminated:
Jobs are used to perform a one-time task rather than to serve an application that requires maintaining a desired state. When the task is complete, the pod goes to the Completed state and terminates (but is not deleted). If your Job is intended for a task that does not take much time, the pod may terminate after completing the task before you have noticed.
As the pod is terminated, kubectl get pods will not show that pod. However, you will be able to see it using kubectl get pods -a, as it hasn't been deleted.
You can also describe the Job and check for a completion or success event.
If you used kind to create the K8s cluster, all the cluster nodes run as Docker containers. If you have rebooted your computer or VM, the cluster (pod) IP addresses may have changed, leading to failed network communication between the cluster nodes. In that case, check the cluster manager logs; they will contain the error message. The Job is created, but the pod is not.
Try to re-create the cluster, or change the node configuration for the IP addresses.