I am using a busybox container to understand Kubernetes concepts, but when I run a simple test-pod.yaml with the busybox image, the pod is in the Completed state instead of the Running state.
Can anyone explain the reason?
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
Look, you should understand the basic concept here: your Docker container keeps running as long as its main process is alive, and it is marked Completed as soon as that main process stops.
Going step by step in your case:
You launch the busybox container with the main process "/bin/sh", "-c", "ls /etc/config/", and that process obviously comes to an end. Once the command completes and the directory has been listed, the process exits with status 0, the container stops, and you see a Completed pod as a result.
If you want the container to run longer, you have to explicitly run something as the main process that keeps the container alive for as long as you need.
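You can see this on the pod itself; for the manifest above, kubectl would show something along these lines (illustrative output):

$ kubectl get pod dapi-test-pod
NAME            READY   STATUS      RESTARTS   AGE
dapi-test-pod   0/1     Completed   0          15s

Because restartPolicy is Never, the kubelet does not restart the container after it exits with code 0.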
Possible solutions
Daniel's answer: the container will execute ls /etc/config/ and will stay alive for an additional 3600 seconds.
Use the sleep infinity option. Be aware that, a long time ago, there was an issue where this option did not work properly with busybox specifically; that was fixed in 2019, more information here. Strictly speaking this is not an infinite loop, but it should be enough for any testing purpose. You can find a thorough explanation in the Bash: infinite sleep (infinite blocking) thread.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-infinity
spec:
  containers:
  - name: busybox-infinity
    image: busybox
    command:
    - /bin/sh
    - "-c"
    - "ls /etc/config/; sleep infinity"
You can use different variations of while loops, tail -f, and so on; that relies only on your imagination and needs.
Examples:
["sh", "-c", "tail -f /dev/null"]
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
That is because busybox runs the command and exits. You can solve it by updating the command in the containers section as follows (with sh -c the two commands have to be combined into a single string, otherwise only the first one runs):
[ "/bin/sh", "-c", "ls /etc/config/ && sleep 3600" ]
Related
I created a pod which serves a redis database and I want to leave it running when it completes. Containers are meant to run to completion. Do I need to create an infinite loop which never ends?
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: lfccncf/redis:latest
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
If the container has a process which keeps running, then you don't need an infinite loop. In this case the container will run the redis process; the Dockerfile's CMD (or ENTRYPOINT) is what starts that process.
Also, I suggest you use a standard redis image or a Helm chart to deploy redis.
Here is a guide on running a PHP guestbook app with redis.
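For example, a minimal sketch of the Pod without the command override (the standard image's own entrypoint starts redis-server, which keeps running as the main process):

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379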
There is no need for
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
Delete those lines; with this override the container runs the loop instead of redis.
I am looking for a kubectl wait command for init containers. Basically, I need to wait until a pod's init container has finished initializing before proceeding to the next step in my script.
I can see a wait option for pods, but nothing specific to init containers.
Any clue how to achieve this?
Please suggest any alternative ways to wait in a script.
You can run multiple commands in the init container or multiple init containers to do the trick.
Multiple commands
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
Refer: https://stackoverflow.com/a/33888424/3514300
Multiple init containers
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Refer: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use
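If you only need your script to block until a Pod's init containers have finished, another option (a sketch, assuming a reasonably recent kubectl and the myapp-pod name from above) is to wait on the Pod's Initialized condition, which becomes True once all init containers have completed successfully:

kubectl wait --for=condition=Initialized pod/myapp-pod --timeout=300s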
I want to set an environment variable (I'll just name it ENV_VAR_VALUE) to a container during deployment through Kubernetes.
I have the following in the pod yaml configuration:
...
...
spec:
  containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
    - containerPort: 1234
    env:
    - name: "ENV_VAR_VALUE"
      value: "some.important.value"
...
...
The container needs to use the value of ENV_VAR_VALUE.
But in the container's application logs, its value always comes out empty.
So, I tried checking its value from inside the container:
$ kubectl exec -it appname-service bash
root@appname-service:/# echo $ENV_VAR_VALUE
some.important.value
root@appname-service:/#
So, the value was successfully set.
I imagine it's because the environment variables defined in Kubernetes are set after the container has already been initialized.
So, I tried overriding the container's CMD from the pod yaml configuration:
...
...
spec:
  containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
    - containerPort: 1234
    env:
    - name: "ENV_VAR_VALUE"
      value: "some.important.value"
    command: ["/bin/bash"]
    args: ["-c", "application-command"]
...
...
Even so, the value of ENV_VAR_VALUE is still empty during the execution of the command.
Thankfully, the application has a restart function
-- because when I restart the app, ENV_VAR_VALUE gets used successfully
-- so I can at least do some other tests in the meantime.
So, the question is...
How should I configure this in Kubernetes so it isn't a tad too late in setting the environment variables?
As requested, here is the Dockerfile.
I apologize for the abstraction...
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
COPY application-script.sh application-script.sh
RUN ./application-script.sh
# ENV_VAR_VALUE is set in this file which is populated when application-command is executed
COPY app-config.conf /etc/app/app-config.conf
CMD ["/bin/bash", "-c", "application-command"]
You can also try running two commands in the Kubernetes Pod spec:
(read in the env vars): "source /env/required_envs.env" (this file would come in via a Secret mounted as a volume)
(run the main command): "application-command"
Like this:
containers:
- name: appname-service
  image: path/to/registry/image-name
  ports:
  - containerPort: 1234
  command: ["/bin/sh", "-c"]
  args:
  - source /env/db_cred.env;
    application-command;
Why don't you move the
RUN ./application-script.sh
below
COPY app-config.conf /etc/app/app-config.conf
It looks like the script runs before the config is available to it.
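A sketch of what that reordering would look like in the Dockerfile above (assuming application-script.sh reads /etc/app/app-config.conf at build time):

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies

# copy the config first so the script can read it when it runs during the build
COPY app-config.conf /etc/app/app-config.conf

COPY application-script.sh application-script.sh
RUN ./application-script.sh

CMD ["/bin/bash", "-c", "application-command"]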
I am trying to create a little Pod example with two containers that share data via an emptyDir volume. In the first container I am waiting a couple of seconds before it gets destroyed.
In the postStart I am writing a file to the shared volume with the name "started", in the preStop I am writing a file to the shared volume with the name "finished".
In the second container I am looping for a couple of seconds outputting the content of the shared volume but the "finished" file never gets created. Describing the pod doesn't show an error with the hooks either.
Maybe someone has an idea of what I am doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-example
  labels:
    app: shared-data-example
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: first-container
    image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "for i in {1..4}; do echo Welcome $i;sleep 1;done"]
    imagePullPolicy: Never
    env:
    - name: TERM
      value: xterm
    volumeMounts:
    - name: shared-data
      mountPath: /myshareddata
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo First container finished > /myshareddata/finished"]
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo First container started > /myshareddata/started"]
  - name: second-container
    image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "for i in {1..20}; do ls /myshareddata;sleep 1;done"]
    imagePullPolicy: Never
    env:
    - name: TERM
      value: xterm
    volumeMounts:
    - name: shared-data
      mountPath: /myshareddata
  restartPolicy: Never
It is happening because the final status of your pod is Completed: the applications inside the containers stopped on their own, without any external call.
Kubernetes runs the preStop hook only when a pod receives an external signal to stop. Hooks exist to implement a graceful, custom shutdown for the applications inside a pod when you need to stop it. In your case, your application already stops gracefully by itself, so Kubernetes has no reason to call the hook.
If you want to see how the hook works, you can create a Deployment and update its image (for example with kubectl set image), which triggers a rolling update. In that case, Kubernetes stops the old version of the application and the preStop hook is called.
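For example, a minimal sketch (names assumed) that adapts the first container so its main process only stops when the pod is told to stop, which is what makes preStop fire:

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: first-container
    image: ubuntu
    # keep the main process alive so termination has to come from outside
    command: ["/bin/bash", "-c", "sleep infinity"]
    volumeMounts:
    - name: shared-data
      mountPath: /myshareddata
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo First container finished > /myshareddata/finished"]

Deleting the pod (kubectl delete pod prestop-demo) sends the stop signal, and the preStop hook writes the finished file before the container is terminated.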
I need to get a work-item from a work-queue and then sequentially run a series of containers to process each work-item. This can be done using initContainers (https://stackoverflow.com/a/46880653/94078)
What would be the recommended way of restarting the process to get the next work-item?
Jobs seem ideal but don't seem to support an infinite/indefinite number of completions.
Using a single Pod doesn't work because initContainers aren't restarted (https://github.com/kubernetes/kubernetes/issues/52345).
I would prefer to avoid the maintenance/learning overhead of a system like argo or brigade.
Thanks!
Jobs should be used for working with work queues. When using work queues you should not set .spec.completions (or you should set it to null). In that case, Pods will keep getting created until one of them exits successfully. It is a little awkward to exit from the (main) container with a failure state on purpose, but this is what the specification asks for. You may set .spec.parallelism to your liking irrespective of this setting; I've set it to 1 since it appears you do not want any parallelism.
In your question you did not specify what you want to do when the work queue becomes empty, so I will give two solutions: one that waits for new items (infinite), and one that ends the Job when the work queue becomes empty (finite, but with an indefinite number of items).
Both examples use redis, but you can apply this pattern to your favorite queue. Note that the part that pops an item from the queue is not safe; if your Pod dies for some reason after having popped an item, that item will remain unprocessed or not fully processed. See the reliable-queue pattern for a proper solution.
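A rough sketch of the direction that pattern takes (not a full implementation; the processing list name is an assumption): instead of popping with BLPOP, atomically move the item onto a processing list with RPOPLPUSH, so that an item popped by a Pod that later dies still exists somewhere and can be re-queued:

redis-cli -h redis rpoplpush job processing >/shared/item.txt

RPOPLPUSH pops from job and pushes the same item onto processing in one atomic step (BRPOPLPUSH is the blocking variant); a separate janitor process can re-queue items that stay in processing for too long.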
To implement the sequential steps on each work item I've used init containers. Note that this really is a primitive solution, but you have limited options if you don't want to use a framework to implement a proper pipeline.
There is an asciinema if anyone would like to see this at work without deploying redis, etc.
Redis
To test this you'll need to create, at a minimum, a redis Pod and a Service. I am using the example from fine parallel processing work queue. You can deploy that with:
kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-pod.yaml
kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-service.yaml
The rest of this solution expects that you have a Service named redis in the same namespace as your Job (with no authentication required) and a Pod called redis-master.
Inserting items
To insert some items in the work queue use this command (you will need bash for this to work):
echo -ne "rpush job "{1..10}"\n" | kubectl exec -it redis-master -- redis-cli
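Optionally, you can verify that the items landed in the queue by checking the list length:

kubectl exec -it redis-master -- redis-cli llen job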
Infinite version
This version waits if the queue is empty thus it will never complete.
apiVersion: batch/v1
kind: Job
metadata:
  name: primitive-pipeline-infinite
spec:
  parallelism: 1
  completions: null
  template:
    metadata:
      name: primitive-pipeline-infinite
    spec:
      volumes: [{name: shared, emptyDir: {}}]
      initContainers:
      - name: pop-from-queue-unsafe
        image: redis
        command: ["sh","-c","redis-cli -h redis blpop job 0 >/shared/item.txt"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-1
        image: busybox
        command: ["sh","-c","echo step-1 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-2
        image: busybox
        command: ["sh","-c","echo step-2 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-3
        image: busybox
        command: ["sh","-c","echo step-3 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      containers:
      - name: done
        image: busybox
        command: ["sh","-c","echo all done with `cat /shared/item.txt`; sleep 1; exit 1"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      restartPolicy: Never
Finite version
This version stops the Job if the queue is empty. Note the trick: the pop init container checks whether the queue is empty, and all subsequent init containers and the main container exit immediately if it is. This is the mechanism that signals Kubernetes that the Job is complete and there is no need to create new Pods for it.
apiVersion: batch/v1
kind: Job
metadata:
  name: primitive-pipeline-finite
spec:
  parallelism: 1
  completions: null
  template:
    metadata:
      name: primitive-pipeline-finite
    spec:
      volumes: [{name: shared, emptyDir: {}}]
      initContainers:
      - name: pop-from-queue-unsafe
        image: redis
        command: ["sh","-c","redis-cli -h redis lpop job >/shared/item.txt; grep -q . /shared/item.txt || :>/shared/done.txt"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-1
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-1 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-2
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-2 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-3
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-3 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      containers:
      - name: done
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo all done with `cat /shared/item.txt`; sleep 1; exit 1"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      restartPolicy: Never
The easiest way in this case is to use a CronJob. A CronJob runs Jobs according to a schedule; for more information, go through the documentation.
Here is an example (I took it from here and modified it)
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sequential-jobs
spec:
  schedule: "*/1 * * * *"   # the schedule, in Linux cron format
  jobTemplate:
    spec:
      template:
        metadata:
          name: sequential-job
        spec:
          initContainers:
          - name: job-1
            image: busybox
            command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" && sleep 5s; done;']
          - name: job-2
            image: busybox
            command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" && sleep 5s; done;']
          containers:
          - name: job-done
            image: busybox
            command: ['sh', '-c', 'echo "job-1 and job-2 completed"']
          restartPolicy: Never
This solution, however, has some limitations:
It cannot run more often than 1 minute
If you need to process your work items one by one, you need to add an extra check for already-running Jobs in an init container (see the sketch after this list)
CronJobs are available only in Kubernetes 1.8 and higher
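A rough sketch of such a check, with loudly stated assumptions: an image that ships kubectl, a ServiceAccount allowed to list pods in the namespace, and a hypothetical job-group=sequential label on the job's pods. Prepended as the first init container, it waits until no pod from a previous run is still running:

initContainers:
- name: wait-for-previous-run
  # assumption: any image that contains kubectl, running with RBAC that allows listing pods
  image: bitnami/kubectl
  command:
  - sh
  - -c
  - >
    until [ "$(kubectl get pods -l job-group=sequential
    --field-selector=status.phase=Running -o name | wc -l)" -eq 0 ];
    do echo waiting for the previous run to finish; sleep 5; done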