Init Containers bypassing the command - kubernetes

I am following documentation example at https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use
I created following pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
    - name: init-myservice
      image: busybox
      command:
        [
          "sh",
          "-c",
          "until nslookup myservice; do echo waiting for myservice; sleep 2; done;",
        ]
    - name: init-mydb
      image: busybox
      command:
        [
          "sh",
          "-c",
          "until nslookup mydb; do echo waiting for mydb; sleep 2; done;",
        ]
  containers:
    - name: myapp-container
      image: busybox
      command: ["sh", "-c", "echo The app is running! && sleep 3600"]
but I have not created the services yet (myservice, mydb).
My expectation is for the deployment to hold until I create the services, but it just continues and creates the pod called "myapp-pod".
Am I missing something with this run?
Why does it not hold until I create the services?

This happens because busybox uses ash as its shell, which behaves differently from bash, so your script actually ends there.
You can try it inside busybox yourself:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
And then use your command:
until nslookup myservice; do echo waiting for myservice; sleep 2; done;
To fix this issue you can try a different image, for example alpine:
kubectl run -i --tty alpine --image=alpine --restart=Never -- sh
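As a rough sketch (an assumption on my part, not tested against your cluster), the first init container could be switched to alpine like this, keeping the rest of the pod unchanged:

initContainers:
  - name: init-myservice
    image: alpine
    command:
      ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"]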

Related

Restart a Kubernetes Job or Pod with a different command

I'm looking for a way to quickly run/restart a Job/Pod from the command line and override the command to be executed in the created container.
For context, I have a Kubernetes Job that gets executed as a part of our deploy process. Sometimes that Job crashes and I need to run certain commands inside the container the Job creates to debug and fix the problem (subsequent Jobs then succeed).
The way I have done this so far is:
Copy the YAML of the Job, save into a file
Clean up the YAML (delete Kubernetes-managed fields)
Change the command: field to tail -f /dev/null (so that the container stays alive)
kubectl apply -f job.yaml && kubectl get all && kubectl exec -ti pod/foobar bash
Run commands inside the container
kubectl delete job/foobar when I am done
This is very tedious. I am looking for a way to do something like the following
kubectl restart job/foobar --command "tail -f /dev/null"
# or even better
kubectl run job/foobar --exec --interactive bash
I cannot use the run command to create a Pod:
kubectl run --image xxx -ti
because the Job I am trying to restart has certain volumeMounts and other configuration I need to reuse. So I would need something like kubectl run --from-config job/foobar.
Is there a way to achieve this or am I stuck with juggling the YAML definition file?
Edit: the Job YAML looks approx. like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migrations
  labels:
    app: myapp
    service: myapp-database-migrations
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app: myapp
        service: myapp-database-migrations
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: registry.example.com/myapp:977b44c9
          command:
            - "bash"
            - "-c"
            - |
              set -e -E
              echo "Running database migrations..."
              do-migration-stuff-here
              echo "Migrations finished at $(date)"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/example/myapp/app/config/conf.yml
              name: myapp-config-volume
              subPath: conf.yml
            - mountPath: /home/example/myapp/.env
              name: myapp-config-volume
              subPath: .env
      volumes:
        - name: myapp-config-volume
          configMap:
            name: myapp
      imagePullSecrets:
        - name: k8s-pull-project
The commands you suggested don't exist. Take a look at this reference where you can find all available commands.
Based on that documentation, the task of a Job is to create one or more Pods and keep retrying their execution until the specified number of them terminate successfully; the Job then tracks the successful completions. You cannot simply update the Job because these fields are not updatable. To do what you want, you should delete the current Job and create it again.
I recommend keeping all of your configuration in files. If you need to change the Job's command, modify the settings in the YAML and apply it to the cluster; by storing the configuration in files you also have a backup if your deployment crashes.
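For instance, a minimal delete-and-recreate round trip (a sketch, assuming the Job manifest above is saved as job.yaml) looks like this:

# remove the finished or failed Job
kubectl delete job database-migrations
# edit the command: field in job.yaml, e.g. to "tail -f /dev/null"
kubectl apply -f job.yaml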
If you are interested in improving this workflow, you can try the two examples described below.
First, I created several files:
example job (job.yaml):
apiVersion: batch/v1
kind: Job
metadata:
  name: test1
spec:
  template:
    spec:
      containers:
        - name: test1
          image: busybox
          command: ["/bin/sh", "-c", "sleep 300"]
          volumeMounts:
            - name: foo
              mountPath: "/script/foo"
      volumes:
        - name: foo
          configMap:
            name: my-conf
            defaultMode: 0755
      restartPolicy: OnFailure
patch-job.yaml:
spec:
  template:
    spec:
      containers:
        - name: test1
          image: busybox
          command: ["/bin/sh", "-c", "echo 'patching test' && sleep 500"]
and configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-conf
data:
  test: |
    #!/bin/sh
    echo "script test"
If you want to automate this process you can use a plugin.
A plugin is a standalone executable file, whose name begins with kubectl-. To install a plugin, move its executable file to anywhere on your PATH.
There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the kubectl binary. A plugin determines which command path it wishes to implement based on its name.
Here is the plugin file that will handle replacing your Job:
kubectl-job:
#!/bin/bash
kubectl patch -f job.yaml -p "$(cat patch-job.yaml)" --dry-run=client -o yaml | kubectl replace --force -f - && kubectl wait --for=condition=ready pod -l job-name=test1 && kubectl exec -it $(kubectl get pod -l job-name=test1 --no-headers -o custom-columns=":metadata.name") -- /bin/sh
This command uses an additional file (patch-job.yaml, shown above) in which we put our changes to the Job.
Then you should make the file executable and move it onto your PATH:
sudo chmod +x ./kubectl-job
sudo mv ./kubectl-job /usr/local/bin
That's all. Now you can use it:
$ kubectl job
job.batch "test1" deleted
job.batch/test1 replaced
pod/test1-bdxtm condition met
pod/test1-nh2pv condition met
/ #
As you can see, the Job has been replaced (deleted and recreated).
You can also use a single-line command; here is an example:
kubectl get job test1 -o json | jq "del(.spec.selector)" | jq "del(.spec.template.metadata.labels)" | kubectl patch -f - --patch '{"spec": {"template": {"spec": {"containers": [{"name": "test1", "image": "busybox", "command": ["/bin/sh", "-c", "sleep 200"]}]}}}}' --dry-run=client -o yaml | kubectl replace --force -f -
With this command you can change your Job by entering the parameters "by hand". Here is the output:
job.batch "test1" deleted
job.batch/test1 replaced
As you can see this solution works as well.

why busybox container is in completed state instead of running state

I am using a busybox container to understand Kubernetes concepts,
but if I run a simple test-pod.yaml with the busybox image, it ends up in Completed state instead of Running state.
Can anyone explain the reason?
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
You should understand the basic concept here: your container keeps running as long as its main process is alive, and it is marked Completed as soon as that process stops.
Going step by step in your case:
You launch the busybox container with the main process "/bin/sh", "-c", "ls /etc/config/", and this process obviously has an end. Once the command completes and the directory has been listed, the process exits with status 0, the container stops, and you see a Completed pod as a result.
If you want the container to run longer, you should explicitly run some command inside the main process that keeps it running for as long as you need.
Possible solutions
Daniel's answer: the container will execute ls /etc/config/ and then stay alive for an additional 3600 seconds.
Use the sleep infinity option. Be aware that there was an old issue where this option did not work properly with busybox specifically; that was fixed in 2019, more information here. This is not actually an infinite loop, but it should be enough for any testing purpose. You can find a thorough explanation in the Bash: infinite sleep (infinite blocking) thread.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-infinity
spec:
  containers:
    - name: busybox-infinity
      image: busybox
      command:
        - /bin/sh
        - "-c"
        - "ls /etc/config/ && sleep infinity"
You can use different variations of while loops, tailing, and so on; it only depends on your imagination and needs.
Examples:
["sh", "-c", "tail -f /dev/null"]
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
That is because busybox runs the command and exits. You can solve it by updating the command in the containers section to the following:
[ "/bin/sh", "-c", "ls /etc/config/ && sleep 3600" ]

Kubectl wait command for init containers

I am looking for a kubectl wait command for init containers. Basically, I need to wait until the init containers of a pod have finished before proceeding to the next step in my script.
I can see a wait option for pods, but nothing specific to init containers.
Any clue how to achieve this?
Please also suggest any alternative ways to wait in a script.
You can run multiple commands in the init container, or use multiple init containers, to do the trick.
Multiple commands
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
Refer: https://stackoverflow.com/a/33888424/3514300
Multiple init containers
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: busybox:1.28
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
    - name: init-myservice
      image: busybox:1.28
      command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
    - name: init-mydb
      image: busybox:1.28
      command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Refer: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use

Registering multiple Services and configure route in KONG with file

Whenever I need to register my EKS services and the required routes with Kong, I have to manually execute curl (POST/GET) commands for each of them. The services and routes get registered successfully, but my requirement is to automate these multiple configurations with Kong, for example by producing a YAML file for all service registrations and routes and then executing it at once.
I explored all the sources, even the official Kong documentation, but couldn't find any way that eases my requirement.
###################### Adding Svc ##########################################
curl -k -i -X POST \
--url https://localhost:7001/services/ \
--data 'name=hello-world1' \
--data 'host=service-helloworld' \
--data 'port=80'
###################### Adding Route ##########################################
curl -k -i -X POST --url https://localhost:7001/services/hello-world/routes --data 'paths=/hello-world' --data 'methods[]=GET'
Is there some way to automate the above curl commands?
If I understand you correctly, these are some of the approaches you are looking for:
Container Lifecycle Hooks
In your case you would want to use PostStart
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
Hook handler implementations
Containers can access a hook by implementing and registering a handler for that hook. There are two types of hook handlers that can be implemented for Containers:
Exec - Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container.
HTTP - Executes an HTTP request against a specific endpoint on the Container.
Your pod might look like the following example:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: lifecycle-demo-container
      image: nginx
      lifecycle:
        postStart:
          exec:
            command:
              - "sh"
              - "-c"
              - >
                curl -k -i -X POST --url https://localhost:7001/services/ --data 'name=hello-world1' --data 'host=service-helloworld' --data 'port=80';
                curl -k -i -X POST --url https://localhost:7001/services/hello-world/routes --data 'paths=/hello-world' --data 'methods[]=GET'
Init Containers
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
And here is an example from the docs:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: busybox:1.28
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
    - name: init-myservice
      image: busybox:1.28
      command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
    - name: init-mydb
      image: busybox:1.28
      command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
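Applied to your case, a rough sketch of an init container doing the registration could look like the snippet below. The curlimages/curl image is an assumption (any image with curl works), and the https://localhost:7001 address is copied straight from your commands; inside the cluster you would normally point this at the Kong Admin API Service instead.

spec:
  initContainers:
    - name: register-with-kong
      image: curlimages/curl   # assumed image providing curl
      command:
        - "sh"
        - "-c"
        - >
          curl -k -i -X POST --url https://localhost:7001/services/ --data 'name=hello-world1' --data 'host=service-helloworld' --data 'port=80';
          curl -k -i -X POST --url https://localhost:7001/services/hello-world/routes --data 'paths=/hello-world' --data 'methods[]=GET'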

Kubernetes - processing an unlimited number of work-items

I need to get a work-item from a work-queue and then sequentially run a series of containers to process each work-item. This can be done using initContainers (https://stackoverflow.com/a/46880653/94078)
What would be the recommended way of restarting the process to get the next work-item?
Jobs seem ideal but don't seem to support an infinite/indefinite number of completions.
Using a single Pod doesn't work because initContainers aren't restarted (https://github.com/kubernetes/kubernetes/issues/52345).
I would prefer to avoid the maintenance/learning overhead of a system like argo or brigade.
Thanks!
Jobs should be used for working with work queues. When using work queues you should not set .spec.completions (or you should set it to null). In that case Pods will keep getting created until one of them exits successfully. It is a little awkward to exit from the (main) container with a failure status on purpose, but this is the specification. You may set .spec.parallelism to your liking regardless of this setting; I've set it to 1 since it appears you do not want any parallelism.
In your question you did not specify what you want to do if the work queue gets empty, so I will give two solutions: one if you want to wait for new items (infinite), and one if you want to end the Job when the work queue gets empty (finite, but with an indefinite number of items).
Both examples use redis, but you can apply this pattern to your favorite queue. Note that the part that pops an item from the queue is not safe; if your Pod dies for some reason after having popped an item, that item will remain unprocessed or not fully processed. See the reliable-queue pattern for a proper solution.
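For reference, a minimal sketch of that reliable-queue idea (an illustration only, not used in the Jobs below): instead of popping the item outright, atomically move it to a backup list and remove it from the backup list only after processing succeeds.

# move the next item from "job" to a "processing" backup list (blocks until one is available)
redis-cli -h redis brpoplpush job processing 0 > /shared/item.txt
# ... run the processing steps on /shared/item.txt ...
# acknowledge: remove the item from the backup list only once processing succeeded
redis-cli -h redis lrem processing 1 "$(cat /shared/item.txt)"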
To implement the sequential steps on each work item I've used init containers. Note that this really is a primitive solution, but you have limited options if you don't want to use some framework to implement a proper pipeline.
There is an asciinema recording if anyone would like to see this in action without deploying redis, etc.
Redis
To test this you'll need to create, at a minimum, a redis Pod and a Service. I am using the example from fine parallel processing work queue. You can deploy that with:
kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-pod.yaml
kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-service.yaml
The rest of this solution expects that you have a Service named redis in the same namespace as your Job, that it does not require authentication, and that there is a Pod called redis-master.
Inserting items
To insert some items in the work queue use this command (you will need bash for this to work):
echo -ne "rpush job "{1..10}"\n" | kubectl exec -it redis-master -- redis-cli
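To check that the items landed in the queue (optional):

kubectl exec redis-master -- redis-cli llen job
# expect 10 if the queue started empty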
Infinite version
This version waits if the queue is empty, and thus it will never complete.
apiVersion: batch/v1
kind: Job
metadata:
  name: primitive-pipeline-infinite
spec:
  parallelism: 1
  completions: null
  template:
    metadata:
      name: primitive-pipeline-infinite
    spec:
      volumes: [{name: shared, emptyDir: {}}]
      initContainers:
        - name: pop-from-queue-unsafe
          image: redis
          command: ["sh","-c","redis-cli -h redis blpop job 0 >/shared/item.txt"]
          volumeMounts: [{name: shared, mountPath: /shared}]
        - name: step-1
          image: busybox
          command: ["sh","-c","echo step-1 working on `cat /shared/item.txt` ...; sleep 5"]
          volumeMounts: [{name: shared, mountPath: /shared}]
        - name: step-2
          image: busybox
          command: ["sh","-c","echo step-2 working on `cat /shared/item.txt` ...; sleep 5"]
          volumeMounts: [{name: shared, mountPath: /shared}]
        - name: step-3
          image: busybox
          command: ["sh","-c","echo step-3 working on `cat /shared/item.txt` ...; sleep 5"]
          volumeMounts: [{name: shared, mountPath: /shared}]
      containers:
        - name: done
          image: busybox
          command: ["sh","-c","echo all done with `cat /shared/item.txt`; sleep 1; exit 1"]
          volumeMounts: [{name: shared, mountPath: /shared}]
      restartPolicy: Never
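To try it out (a sketch; pipeline-infinite.yaml is just a filename of my choosing for the manifest above):

kubectl apply -f pipeline-infinite.yaml
kubectl get pods -l job-name=primitive-pipeline-infinite   # find the current pod
kubectl logs <pod-name> -c step-1                           # logs from one of the init steps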
Finite version
This version stops the Job if the queue is empty. Note the trick: the pop init container checks whether the queue is empty, and all the subsequent init containers and the main container exit immediately if it is indeed empty. This is the mechanism that signals Kubernetes that the Job is complete and there is no need to create new Pods for it.
apiVersion: batch/v1
kind: Job
metadata:
  name: primitive-pipeline-finite
spec:
  parallelism: 1
  completions: null
  template:
    metadata:
      name: primitive-pipeline-finite
    spec:
      volumes: [{name: shared, emptyDir: {}}]
      initContainers:
        - name: pop-from-queue-unsafe
          image: redis
          command: ["sh","-c","redis-cli -h redis lpop job >/shared/item.txt; grep -q . /shared/item.txt || :>/shared/done.txt"]
          volumeMounts: [{name: shared, mountPath: /shared}]
        - name: step-1
          image: busybox
          command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-1 working on `cat /shared/item.txt` ...; sleep 5"]
          volumeMounts: [{name: shared, mountPath: /shared}]
        - name: step-2
          image: busybox
          command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-2 working on `cat /shared/item.txt` ...; sleep 5"]
          volumeMounts: [{name: shared, mountPath: /shared}]
        - name: step-3
          image: busybox
          command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-3 working on `cat /shared/item.txt` ...; sleep 5"]
          volumeMounts: [{name: shared, mountPath: /shared}]
      containers:
        - name: done
          image: busybox
          command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo all done with `cat /shared/item.txt`; sleep 1; exit 1"]
          volumeMounts: [{name: shared, mountPath: /shared}]
      restartPolicy: Never
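Once the queue drains, this Job completes, so a script can simply block on the completion condition (the 600s timeout is an arbitrary choice):

kubectl wait --for=condition=complete job/primitive-pipeline-finite --timeout=600s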
The easiest way in this case is to use a CronJob. A CronJob runs Jobs according to a schedule. For more information, go through the documentation.
Here is an example (I took it from here and modified it):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sequential-jobs
spec:
  schedule: "*/1 * * * *" # the schedule, in Linux cron format
  jobTemplate:
    spec:
      template:
        metadata:
          name: sequential-job
        spec:
          initContainers:
            - name: job-1
              image: busybox
              command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" && sleep 5s; done;']
            - name: job-2
              image: busybox
              command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" && sleep 5s; done;']
          containers:
            - name: job-done
              image: busybox
              command: ['sh', '-c', 'echo "job-1 and job-2 completed"']
          restartPolicy: Never
This solution, however, has some limitations:
It cannot run more often than once a minute.
If you need to process your work-items one by one, you need to create an additional check for running Jobs in an init container.
CronJobs are available only in Kubernetes 1.8 and higher.