I have a pod named 'sample_pod' and a container named 'sample_container' running inside the pod. sample_container's entry point is a Python file (sample.py). Inside this container I have CRL certificates which get refreshed every hour, and sample.py does not pick up the refreshed certificates without a reload.
I need to reload that container every hour without killing/restarting it. This is exactly like the systemd reload option. Is there a specific reload command that I can run/schedule every hour inside sample_container?
If so, how can I schedule that command to run inside the container every hour? Or is there a Kubernetes-native approach to achieve this?
For your use case, just don't use containers; use a classical server with a cron task instead. (cf. my comment under your question)
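If you do go the cron route, the entry could be as simple as the sketch below, assuming sample.py re-reads its certificates on SIGHUP (the question doesn't say it handles any signal, so that part is an assumption):

    # /etc/crontab: send SIGHUP to sample.py at the top of every hour
    0 * * * * root pkill -HUP -f sample.py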
Let's say I want to execute a cleanup script whenever container termination is triggered. How do I go about this using docker-compose?
This could be handy for automatically backing up files, databases, etc. for the dev container.
docker containers are meant to be ephemeral:
By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
Building upon this concept, Docker itself does not offer anything to hook into the shutdown process. docker-compose is built on top of Docker and does not add such functionality either.
Maybe you can rethink your problem the Docker way to better fit the intended use of Docker. Without further context it is hard to say what a good solution would be, but maybe one of the following approaches helps you out:
docker stop sends a SIGTERM signal to the main process in the container. You could use a custom entrypoint or supervisor process that triggers the appropriate actions on SIGTERM; this approach requires custom containers. With the stop_signal attribute you can also configure a custom signal to be sent in your docker-compose.yml (see the sketch after this list)
if you just want to persist data files from the containers, configuring the right volumes might be enough
you could use docker events to listen for and act upon any type of event emitted by the Docker daemon
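A minimal sketch of the entrypoint approach, assuming a hypothetical backup.sh and a placeholder main process (neither name is from the question):

    #!/bin/sh
    # entrypoint.sh: run the main process and trigger cleanup when docker stop sends SIGTERM
    cleanup() {
        /usr/local/bin/backup.sh   # hypothetical backup/cleanup script
        exit 0
    }
    trap cleanup TERM INT

    # start the real workload in the background so this shell can receive the signal
    my-main-process &
    wait $!

In docker-compose.yml you would point entrypoint at this script and, if the cleanup can take a while, raise stop_grace_period so Docker does not follow up with SIGKILL too early.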
I've been experimenting and searching for a long time without finding an answer that works.
I have a Windows Container and I need to embed a startup script that runs each time a new container is created.
All the answers I found suggest one of the following:
Add the command to the Dockerfile - this is not good because it will only run when the image is built. I need it to run every single time a new container is created from the image.
Use docker exec after starting a container - this is also not what I need. These images are intended to be "shippable"; I need the script to run without any special action apart from creating a new container.
Use ENTRYPOINT - I had two cases here: either it fails and immediately exits, or it succeeds but the container stops. I need the container to keep running.
Basically, the goal of this is to do some initial configuration on the container when it starts and keep it running.
The actions are around generating a GUID and registering the hostname. These have to be unique, which is why I need to run them immediately when the container starts.
Looks like CMD in the Dockerfile is all I needed. I used:
CMD powershell -file
I simply checked in the script whether it was the first time it was running.
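A rough sketch of how that can fit together; the base image tag, script name, and marker path are all examples, not from the answer:

    # Dockerfile
    FROM mcr.microsoft.com/windows/servercore:ltsc2022
    COPY startup.ps1 C:/startup.ps1
    CMD powershell -file C:/startup.ps1

    # startup.ps1: one-time setup on the first start, then keep the container alive
    $marker = 'C:/initialized.marker'
    if (-not (Test-Path $marker)) {
        # first run only: generate the unique GUID and register the hostname
        $guid = [guid]::NewGuid().ToString()
        # ... call your registration endpoint with $guid and $env:COMPUTERNAME here ...
        New-Item -Path $marker -ItemType File | Out-Null
    }
    # keep the container running after setup
    while ($true) { Start-Sleep -Seconds 60 }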
I am trying to set up a complete GitLab routine that sets up my Kubernetes cluster with all installations and configurations automatically, including decommissioning at the end.
However, the creation and decommissioning phase is one of the most time-consuming parts, because I am basically waiting for the provisioning before I can execute further commands.
As I sometimes run into trouble with the basics of the Kubernetes setup, I currently decommission my cluster and create a new one. But this is pretty uneconomical and time-consuming.
Question:
Is there a command or a series of commands to completely reset a Kubernetes cluster to its state right after creation?
The closest is probably to do all your work in a new namespace and then delete that namespace when you are done. That automatically deletes all objects inside it.
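In kubectl terms, that workflow could look roughly like this (the namespace name is just an example):

    # do all the work in a throwaway namespace
    kubectl create namespace scratch
    kubectl apply -n scratch -f ./manifests/

    # ... run the pipeline steps ...

    # one command tears down every object inside the namespace
    kubectl delete namespace scratch

Note that cluster-scoped objects (CRDs, ClusterRoles, PersistentVolumes) are not namespaced and would need separate cleanup.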
I am creating the deployments/services using REST APIs. I send POST requests with bodies containing the JSON objects that create the applications on OpenShift. After I call all the APIs, these objects get instantiated.
I have 2 deployments that depend on a mongodb deployment, but mongodb takes a little longer to start running, so the two dependent deployments start running earlier. This breaks the code inside the 2 deployments because the mongodb connection fails (since it is not up yet).
There are 2 possible ways I could fix this problem:
Put a delay after I create the mongodb deployment and repeatedly call the API to check its status until it is running.
Just like in docker-compose, with the depends_on key, which tells docker-compose that all the dependencies should be started first and only then the dependent container.
Is there any way this could be achieved in OpenShift?
Instead of implementing complex logic for dependency handling, use the health-checking mechanism of Kubernetes. If your application starts and doesn't see MongoDB, let it crash. Kubernetes will keep restarting it until MongoDB comes online and your application becomes healthy and starts serving. Kubernetes won't send traffic to instances that aren't healthy yet.
Docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
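A minimal probe sketch for one of the dependent deployments; the container name, image, port, and /healthz path are assumptions, use whatever health endpoint your app exposes:

    # pod template snippet from one of the dependent deployments
    containers:
    - name: my-app                 # hypothetical container name
      image: my-app:latest         # hypothetical image
      readinessProbe:              # gate traffic until the app can reach MongoDB
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:               # restart the container if it stops being healthy
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20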
Just like in docker-compose, with the depends_on key, which tells docker-compose that all the dependencies should be started first and only then the dependent container.
You might want to look into Init Containers for the dependent containers. They run to completion before the app container is actually started. The excerpt below is taken from the referenced documentation (linked below) for use cases that might apply to your issue:
They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.
Examples
Here are some ideas for how to use Init Containers:
Wait for a service to be created with a shell command like:
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
Register this Pod with a remote server from the downward API with a command like:
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'
Wait for some time before starting the app Container with a command like sleep 60.
Reference documentation:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
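Applied to the MongoDB case, an init container on each dependent deployment could look roughly like the following; the mongodb service name, busybox image, and app container are assumptions:

    # pod template snippet of a dependent deployment
    spec:
      initContainers:
      - name: wait-for-mongodb
        image: busybox:1.36
        # block until the mongodb Service name resolves in cluster DNS
        command: ['sh', '-c', 'until nslookup mongodb; do echo waiting for mongodb; sleep 2; done']
      containers:
      - name: my-app               # hypothetical app container
        image: my-app:latest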
Alex has pointed out the correct practice to follow with Kubernetes. But if you still want to depend directly on another pod's phase, you can use this pod-dependency-init-container that I have built. It checks whether any pod with the given labels is running before starting your pod.
Say I have a container image that contains a large command-line program that is executed from the shell. I have another container that contains a scheduler whose job it is to invoke the first container when it receives a certain signal. For various reasons I don't want to put them in the same container (mainly because the scheduler can invoke many different tools, and different versions of those tools, and I don't want to have to put all the tools and their versions in the same container image.)
I know how to put two containers in the same pod. However, the default behavior is to run both containers at startup. What I want to be able to do is to have the scheduler be able to decide when to invoke the other container, and to be able to specify the command-line arguments (and ideally, environment variables) for it. Also, I need to know the exit status. Extra credit for getting stdout/stderr, but I can hack around with volumes if I need to.
I also know how to do this if the second container were a server, but in this case it's a shell program.
A quick way to do this is:
Add kubectl proxy to your container startup.
Then create a Kubernetes Job from the first pod.
This gives you a lightweight solution in which the desired Job can be queried for its success state, seemingly fulfilling your requirements.
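A rough sketch of what the scheduler could do once kubectl proxy is listening on localhost:8001; the namespace, Job name, image, and arguments are all placeholders:

    # kubectl proxy is assumed to be running inside the scheduler pod, e.g.:
    #   kubectl proxy --port=8001 &

    # create a Job that runs the CLI tool with the desired arguments
    curl -X POST http://localhost:8001/apis/batch/v1/namespaces/default/jobs \
      -H 'Content-Type: application/json' \
      -d '{
            "apiVersion": "batch/v1",
            "kind": "Job",
            "metadata": {"name": "run-tool-1"},
            "spec": {
              "template": {
                "spec": {
                  "containers": [{"name": "tool",
                                  "image": "my-tool:1.0",
                                  "command": ["my-tool", "--some-arg", "value"],
                                  "env": [{"name": "SOME_VAR", "value": "x"}]}],
                  "restartPolicy": "Never"
                }
              }
            }
          }'

    # poll the Job for its success/failure status
    curl http://localhost:8001/apis/batch/v1/namespaces/default/jobs/run-tool-1

The Job's status reports succeeded/failed counts, and stdout/stderr can be read from the Job's pod logs via /api/v1/namespaces/default/pods/<pod-name>/log; note that the scheduler pod's ServiceAccount needs RBAC permission to create Jobs in that namespace.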