Is there any approach by which one container can call a command in another container? The containers are in the same pod.
I need many command-line tools which are shipped as images as well as packages, but I don't want to install all of them into one container because of some concerns.
This is possible as long as you are on Kubernetes v1.17+. Enable shareProcessNamespace: true on the pod, and then all container processes are visible to the other containers in the same pod.
Here are the docs, have a look.
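A minimal sketch of a pod spec using it (the names and images here are placeholders, not anything from the question):

apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo            # hypothetical name
spec:
  shareProcessNamespace: true      # all containers in this pod share one PID namespace
  containers:
  - name: tools                    # image shipping the command-line tools
    image: example.com/tools:latest
    command: ["sleep", "infinity"]
  - name: app
    image: example.com/app:latest
    command: ["sleep", "infinity"]
    # from here, `ps ax` lists the tools container's processes, and /proc/<pid>/root
    # exposes its filesystem (this may require running as root or having SYS_PTRACE)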
In general, no, you can't do this in Kubernetes (or in plain Docker). You should either move the two interconnected things into the same container, or wrap some sort of network service around the thing you're trying to call (and then probably put it in a separate pod with a separate service in front of it).
There might be something you could do if you set up a service account, installed a Kubernetes API sidecar container, and used the Kubernetes API to do the equivalent of kubectl exec, but I'd consider this a solution of last resort.
Containers in a pod are isolated from each other except that they can share volumes and the network namespace, so you cannot directly execute a command from one container in another. However, you could expose the commands in a container through an API.
We are currently running EKS v1.20 and were able to achieve this using shareProcessNamespace: true as mentioned in the earlier answer. In our particular case, we needed a Debian 10 PHP container to execute a SAS binary with arguments. SAS is installed and running in a CentOS 7 container in the same pod. Using Helm, we enabled shareProcessNamespace, and in the PHP container's command and args fields we built symlinks to that binary using bash -c once the pod came online. We grab the PID of the shared container with pgrep; since we know that the CentOS container's entrypoint is tail -f /dev/null, we just look for that process with $(pgrep tail) initially.
- image: some_php_container
  command: ["bash", "-c"]
  args: [ "SAS_PROC_PID=$(pgrep tail) && \
          ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8 /usr/bin/sas && \
          ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS /usr/local/SAS && \
          . /opt/script_runner.sh" ]
Now the PHP container is able to execute the sas command with arguments and process data files using the SAS software running in the CentOS container.
One issue we quickly found: if the SAS container happens to die in the pod, the PID changes and the symlinks on the PHP container break. So we put in a liveness probe that frequently checks whether the path to the binary under the current PID still exists; if the probe fails, the PHP container is restarted, which rebuilds the symlinks with the right PID.
livenessProbe:
  exec:
    command:
    - bash
    - -c
    - SAS_PROC_PID=$(pgrep tail) && test -f /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 1
Hopefully the above info can help someone else.
You can do this without shareProcessNamespace by using a shared volume and some named pipes. It manages all the I/O for you and is trivially simple and extremely fast.
For a complete description and code, see this solution I created. Contains examples.
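For illustration only, here is a rough sketch of the general idea (not the linked solution itself): two containers share an emptyDir, one sits on a named pipe and executes whatever arrives, and the other writes commands into the pipe.

apiVersion: v1
kind: Pod
metadata:
  name: pipe-demo                  # hypothetical name
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: executor                 # owns the tools; runs commands read from the pipe
    image: example.com/tools:latest
    volumeMounts:
    - name: shared
      mountPath: /shared
    command: ["sh", "-c"]
    args:
    - |
      mkfifo /shared/cmd.pipe
      while true; do
        sh -c "$(cat /shared/cmd.pipe)" > /shared/out.log 2>&1
      done
  - name: caller                   # writes commands into the pipe
    image: example.com/app:latest
    volumeMounts:
    - name: shared
      mountPath: /shared
    command: ["sleep", "infinity"]
    # from this container: echo "some-tool --arg" > /shared/cmd.pipe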
Related
I have two containers in the same k8s pod, container1 and container2.
I have a Python script in container1. How can I run this Python script on container2?
I mean something like ssh:
ssh user@container2 python < script_on_container1.py
You can't; you need to make a network request from one container to the other.
I would suggest prototyping this first in a local Python virtual environment. In container2, run some lightweight HTTP server like Flask, and when it receives a request, it calls whatever code the script would have run. Once you've demonstrated this working, you can set up the same thing in local Docker containers, and if that works, make these two processes into two separate Kubernetes deployments.
In general with containers you can't "run commands" in other containers. In your proposed solution, containers don't typically run ssh daemons, and maintaining valid login credentials without compromising them is really hard. (If you put an ssh private key into a Docker image, anyone who has the image can trivially extract it. Kubernetes secrets make this a little bit better but not great.) The standard pattern for one container to invoke another is through a network path, generally either a direct HTTP request or through a queueing system like RabbitMQ.
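If you go the HTTP route, the Kubernetes side is just an ordinary Deployment plus a Service in front of it. A minimal sketch, where the names, image and port are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: script-runner              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: script-runner
  template:
    metadata:
      labels:
        app: script-runner
    spec:
      containers:
      - name: web
        image: example.com/script-runner:latest   # wraps the script in a small HTTP server
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: script-runner
spec:
  selector:
    app: script-runner
  ports:
  - port: 80
    targetPort: 5000

The other process then just calls it over the network, e.g. curl http://script-runner/run.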
A workaround for your problem is the following:
Create a volume and share it between both containers (con1 and con2) of the pod.
In con1, create a cron job that runs every <X> minutes and reads a file available on the shared volume.
Whenever con2 makes a change in that file, con1 will perform whatever operation is specified in the shared file via the shared volume (see the sketch below).
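A minimal sketch of that layout (the images and file name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: shared-file-demo           # hypothetical name
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  containers:
  - name: con1
    image: example.com/con1:latest
    volumeMounts:
    - name: workdir
      mountPath: /work
    command: ["sh", "-c"]
    args:
    - |
      # poor man's cron: check the shared file every 60 seconds and act on it
      while true; do
        if [ -f /work/commands.txt ]; then
          sh /work/commands.txt
        fi
        sleep 60
      done
  - name: con2
    image: example.com/con2:latest
    volumeMounts:
    - name: workdir
      mountPath: /work
    # con2 writes the operations to perform into /work/commands.txt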
I hope this helps.
You can use this script: https://gist.github.com/ahmetkotan/b592bf9919cfac7daa2e2e6c2e95772d
python kubernetes_api_exec.py <pod_name> container2 python < script_on_container1.py
or other commands. For example:
python kubernetes_api_exec.py <pod_name> container2 date
python kubernetes_api_exec.py <pod_name> container2 uname -a
I have a Daemonset that places a pod onto all of my cluster's nodes. That pod looks for a set of conditions. When they are found it is supposed to execute a bash script on its node.
Currently my pod that I apply as a daemon set mounts the directory with the bash script. I am able to detect the conditions that I am looking for. When the conditions are detected I execute the bash script but it ends up running in my alpine container inside my pod and not on the host node.
As a simple example of what is not working for me (in the spec):
command: ["/bin/sh"]
args: ["-c", "source /mounted_dir/my_node_script.sh"]
I want to execute the bash script on the NODE the pod is running on, not within the container/pod. How can this be accomplished?
Actually, a command run inside a pod does run on the host's kernel; a container (Docker) is not a virtual machine. It runs inside the container's namespaces, though, so by default it sees the container's filesystem rather than the node's.
If your actual problem is that you want to do something which a normal container isn't allowed to, you can run the pod in privileged mode or configure exactly what you need.
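If what you actually need is to act on the node itself rather than inside the container's mount namespace, one common pattern is to give the DaemonSet pod hostPID plus privileged mode and use nsenter to enter the host's namespaces. A sketch, where the image and the script path on the node are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-script-runner         # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-script-runner
  template:
    metadata:
      labels:
        app: node-script-runner
    spec:
      hostPID: true                # see the node's PID namespace
      containers:
      - name: runner
        image: alpine:3.18
        securityContext:
          privileged: true         # required to enter the host's namespaces
        command: ["sh", "-c"]
        args:
        - |
          apk add --no-cache util-linux
          # /opt/scripts/my_node_script.sh is a hypothetical path on the node itself
          nsenter -t 1 -m -u -i -n -p -- sh /opt/scripts/my_node_script.sh
          sleep infinity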
I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.
I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.
I've read about StatefulSets and searched the documentation, but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a Service if I understood correctly, and also passed directly as an argument to the pod command in the args.
Is there a way to achieve this without using multiple Deployments, one for each Pod I need to create? That's the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many Deployments and/or Services as needed, but I would like to find a solution which can scale at runtime, if possible.
I don't think you can have a Deployment which creates Pods from different specs. Kubernetes doesn't support that, and Helm won't help here (since Helm is just a template manager over Kubernetes configurations).
What you can do is specify each Pod as a separate configuration (for a single Pod you don't necessarily need a Deployment) and let Helm manage it.
Posting the solution I used since it could be useful for other people searching around.
In the end I found a great configuration to solve my problem. I used a StatefulSet to declare the deployment of the Spark application. Associated with the StatefulSet is a headless Service which exposes each Pod on a specific port.
A StatefulSet can declare a spec.serviceName property set to the name of a headless Service, which creates a unique network identity for each Pod, something like <pod_name>.<service_name>.
Additionally, each Pod has a unique and stable name, created from the application name plus an ordinal starting from 0 for each replica Pod.
Using a start script in the Docker image and injecting the Pod name from the metadata into each Pod's environment, I was able to use a different configuration for each Pod: even within the same StatefulSet, each Pod has its own unique metadata name, and I can use the StatefulSet's Service to obtain what I needed.
This way, the StatefulSet is scalable at run time and works as expected.
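A trimmed-down sketch of this kind of configuration (the names, image, port and start script are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: spark-app                  # headless Service: gives each Pod a stable DNS name
spec:
  clusterIP: None
  selector:
    app: spark-app
  ports:
  - name: driver
    port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-app
spec:
  serviceName: spark-app           # Pods become spark-app-0.spark-app, spark-app-1.spark-app, ...
  replicas: 2
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
      - name: app
        image: example.com/spark-app:latest
        env:
        - name: POD_NAME           # unique, stable name the start script can derive the port and app name from
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: ["sh", "-c", "/opt/start.sh --app-name=$(POD_NAME)"]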
Hey, I am not sure if this will exactly match your scenario, but I think this is something you can try: use a sidecar container to run the replica instances. A sidecar is a container which runs alongside the main container in the same Pod, shares its network namespace, and can share volumes with it.
Now, to pass different arguments to each container or sidecar, you will have to tweak the Dockerfile, or rather the way your container starts.
Create a start.sh script which accepts the arguments and starts the container with them. The trick here is to read the arguments from environment variables, which lets you configure them later from ConfigMaps or the pod env.
So here is an example of a PHP/Laravel application running the same code and starting with different arguments. The start.sh file looks like this:
#!/bin/sh
if [ "${CONTAINER_ROLE}" = "queue" ]; then
    echo "Running the queue..."
    php artisan queue:work --queue=${QUEUENAME}
    echo "Queue Started"
else
    echo "Running Iceberg."
    exec apache2-foreground
fi
A sample Dockerfile looks like this:
FROM php:7.1.24-apache
COPY . /srv/app
...
...
RUN chown -R www-data:www-data /srv/app \
    && a2enmod remoteip && a2enmod rewrite
WORKDIR /srv/app
RUN chmod +x .docker/start.sh
CMD ["sh", ".docker/start.sh"]
Let me know how it goes.
Is there any way I can exec into the container, then edit some code (ex: add some log, edit come configuration file, etc) and restart the container to see what happens?
I tried to search for this but found nothing helpful.
The point is, I want to do a quick debug, not to do a full cluster deployment.
Some programs (e.g. nginx) support configuration reload without restarting their process; with these you can just kubectl exec in, change the config, and send a signal to the master process (e.g. kubectl exec <nginx_pod> -- kill -HUP 1). It is a feature of the specific software, though, so many programs will not support it.
Containers are immutable by design, so they restart with a clean state each time. That said, while there is no simple way of doing this, there are hackish ways to achieve it.
One I can think of involves modifying the image on the node and then restarting the container. If you can SSH into the node and access Docker directly, you can identify the container whose files you modified and commit those changes with docker commit under the same tag. At that point the local image with that tag has your changes baked in, so if you restart the container (not reschedule it, as it could start on a different node), it will come up with your changes (assuming you do not use imagePullPolicy: Always).
Again, not the way it's meant to be used, but achievable.
Any changes to the local container filesystem will be lost if you restart the pod. You would need to work out whether the application stack you are using can perform an internal restart without actually exiting.
What language/application stack are you using?
You should at least consider a hostPath volume in order to share local files on your host with your Kubernetes instance and be able to do that kind of test.
After that, it is up to the application running within your pod to detect the file change and restart if needed (i.e., this is not specific to Kubernetes at all).
You could put any configuration in a ConfigMap and then just apply that, obviously assuming whatever reads the ConfigMap will re-read it.
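A minimal sketch of that (the names and config key are placeholders): mount the ConfigMap as a volume, and after you apply an updated ConfigMap the mounted file is refreshed in place after a short delay.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  app.conf: |
    log_level = debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
  containers:
  - name: app
    image: example.com/app:latest
    volumeMounts:
    - name: config
      mountPath: /etc/app          # app.conf appears as /etc/app/app.conf and is updated on apply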
I have faced the same issue in my container as well.
I did the below steps in the Kubernetes container and it worked.
Logged into the pod, e.g.:
kubectl exec --stdin --tty nginx-6799fc88d8-mlvqx -- /bin/bash
Once logged in to the application pod, I ran the below commands:
#apt-get update
#apt-get install vim
Now I am able to use the vim editor in the Kubernetes container.
My web application consists of 3 Docker containers: app (the main container with the code), redis and node. I have a deployment shell script which does the following things:
clones master from git (git clone <...> $REVISION)
removes all files from the document root directory (rm -rf $PROJECT_DIR)
moves everything cloned into the document root (mv $REVISION $PROJECT_DIR)
stops all running containers (docker-compose stop)
removes all stopped containers (docker-compose rm -f)
builds the containers (docker-compose build)
runs all built containers (docker-compose up -d)
runs all init and start scripts inside the containers via docker exec (for example: config compilers, nginx reload)
And this works fine for me, but I have several doubts about this scheme:
In step 6, if I haven't changed the files for the node container, it will use the already-built image, which is fast. But if I change something, the container is rebuilt, which is slow and accumulates unused images.
In the worst case (when I have made changes to the node code) deployment takes maybe 2-3 minutes, in the best case about 30 seconds. But even then it's downtime for some users.
As I see it, I need the ability to build the new container in parallel (while the old container keeps working), and only after a successful build switch the latest tag to the new container, which the app uses. How can I do this?
I will be very thankful for your comments.
What I do is tag all my images by version in addition to tagging them "latest". So I have one image with multiple tags; just tag it with more than one. When you tag by version, it lets you move the "latest" tag around without problems:
docker build -t myapp .
docker tag myapp:latest myapp:0.8.1
Now when you run docker images you'll see the same image listed twice, just with different tags (both "latest" and "0.8.1"). So when you go to build something like you mentioned:
# the original container is still running while this builds ...
docker build -t myapp .
# now tag "latest" to the newest version
docker tag myapp:latest myapp:0.8.2
# and now you can just stop and restart the container ...
docker rename myapp myapp-old
docker stop myapp-old
docker run -d --name=myapp -p 80:80 myapp:latest
This is something you could do, but it sounds like what you really need is a way to swap containers without any downtime: zero-downtime container changes.
There is a process I have used for a couple of years now: putting an Nginx reverse proxy in front of your Docker containers. Jason Wilder details the process in this blog post.
I'll give you an overview of what this will do for you. The jwilder/nginx-proxy docker image will serve as a reverse proxy for your containers, and by default it round-robin load-balances inbound connections to containers based on the hostname. After you build and run a container with the same VIRTUAL_HOST environment variable, nginx-proxy automatically round-robin load-balances the two containers. This way, you can start the new container, and it will begin servicing requests. Then you can just bring down your other, old container. Zero-downtime updates.
Just some details: the nginx-proxy image uses Jason Wilder's docker-gen utility to automatically grab the Docker container information and then route requests to each container. What this means is that you start your normal containers with a new environment variable (VIRTUAL_HOST), and nginx-proxy will automatically begin routing inbound requests to them. This is best used to "share" a port (e.g. tcp/80) among many containers. The reverse proxy can also handle HTTPS as well as HTTP authentication, so you don't have to handle those inside your web containers. The backend is unencrypted (HTTP), but since it's on the same host, that's not a problem.
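A minimal docker-compose sketch of that setup (the hostname and app image tag are illustrative):

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets docker-gen watch container start/stop events
  app:
    image: myapp:0.8.2
    environment:
      - VIRTUAL_HOST=app.example.com               # nginx-proxy routes requests for this hostname here

To roll out a new version with zero downtime, you start a second container with the same VIRTUAL_HOST, let nginx-proxy pick it up, and then stop the old one.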