I have two containers in the same Kubernetes pod, container1 and container2.
I have a Python script in container1. How can I run this Python script on container2?
I mean something like ssh:
ssh user@container2 python < script_on_container1.py
You can't; you need to make a network request from one container to the other.
I would suggest prototyping this first in a local Python virtual environment. In container2, run some lightweight HTTP server like Flask, and when it receives a request, it calls whatever code the script would have run. Once you've demonstrated this working, you can set up the same thing in local Docker containers, and if that works, make these two processes into two separate Kubernetes deployments.
In general with containers you can't "run commands" in other containers. Your proposed ssh approach won't work in practice: containers don't typically run ssh daemons, and maintaining valid login credentials without compromising them is really hard. (If you put an ssh private key into a Docker image, anyone who has the image can trivially extract it. Kubernetes secrets make this a little bit better, but not great.) The standard pattern for one container to invoke another is through a network path, generally either a direct HTTP request or a queueing system like RabbitMQ.
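A minimal sketch of that pattern, assuming container2 serves a hypothetical /run endpoint on port 8080 (the endpoint name and port are illustrative, not a prescribed API). Within one pod the containers share a network namespace, so localhost works; between separate pods or deployments you would use the Service name instead:
# container2: wrap the script's logic in a tiny Flask app
from flask import Flask

app = Flask(__name__)

def do_the_work():
    # whatever script_on_container1.py used to do goes here
    return "done"

@app.route("/run", methods=["POST"])
def run():
    return do_the_work()

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

# container1: trigger it with a plain HTTP request
#   import requests
#   requests.post("http://localhost:8080/run")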
A workaround for your problem is the following (a sketch of the pod spec follows this list):
Create a volume and share it between both containers (con1 and con2) of the pod.
In con1, create a cron job that runs every <X> minutes and reads a file on the shared volume.
Whenever con2 changes that file, con1 will perform whatever operation is specified in it.
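A minimal sketch of such a pod with a shared emptyDir volume (names, images, and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: con1
      image: python:3.11-slim        # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared-data
          mountPath: /shared
    - name: con2
      image: python:3.11-slim
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared-data
          mountPath: /shared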
I hope this helps.
You can use this script: https://gist.github.com/ahmetkotan/b592bf9919cfac7daa2e2e6c2e95772d
python kubernetes_api_exec.py <pod_name> container2 python < script_on_container1.py
or other commands, for example:
python kubernetes_api_exec.py <pod_name> container2 date
python kubernetes_api_exec.py <pod_name> container2 uname -a
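If container1 has kubectl installed and a service account with permission for pods/exec, the equivalent without the helper script would be roughly (pod name is whatever yours is):
kubectl exec -i <pod_name> -c container2 -- python < script_on_container1.py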
I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.
I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.
I've read about StatefulSets and searched the documentation, but I didn't find a solution to my problem. Since every Pod needs to use a different port, that port needs to be declared in a Service if I understood correctly, and also passed directly as an argument to the Pod command in the args.
Is there a way to obtain this without using multiple Deployments, one for each Pod I need to create? That is the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many Deployments and/or Services as needed, but I would like to find a solution that can scale at runtime, if possible.
I don't think you can have a Deployment that creates Pods from different specs. Kubernetes doesn't support it, and Helm won't help here (Helm is just a template manager over Kubernetes configurations).
What you can do is specify each Pod as a separate configuration (for a single Pod you don't necessarily need a Deployment) and let Helm manage it.
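As a rough illustration of that approach, a Helm template could render one Pod per entry in values.yaml; the names, image, and ports here are hypothetical:
# values.yaml
instances:
  - name: spark-app-0
    port: 7077
  - name: spark-app-1
    port: 7078

# templates/pods.yaml
{{- range .Values.instances }}
---
apiVersion: v1
kind: Pod
metadata:
  name: {{ .name }}
spec:
  containers:
    - name: app
      image: my-spark-image                                  # hypothetical image
      args: ["--name", "{{ .name }}", "--port", "{{ .port }}"]
      ports:
        - containerPort: {{ .port }}
{{- end }}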
Posting the solution I used since it could be useful for other people searching around.
In the end I found a configuration that solved my problem. I used a StatefulSet to declare the deployment of the Spark application, together with a headless Service that exposes each Pod on a specific port.
A StatefulSet can declare a property spec.serviceName, set to the name of a headless Service, which gives each Pod a unique network name of the form <pod_name>.<service_name>.
Additionally, each Pod gets a unique, stable name made of the application name plus an ordinal starting from 0 for each replica.
Using a start script in the Docker image and injecting the Pod name from the metadata into each Pod's environment, I was able to use a different configuration for each Pod: even within the same StatefulSet, every Pod has its own unique metadata name, and I can use the StatefulSet's Service to obtain what I needed.
This way the StatefulSet is scalable at runtime and works as expected. A sketch of the setup follows.
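Roughly, the setup looks like this (names, image, and port are placeholders, not the exact configuration used):
apiVersion: v1
kind: Service
metadata:
  name: spark-app              # headless Service
spec:
  clusterIP: None
  selector:
    app: spark-app
  ports:
    - port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-app
spec:
  serviceName: spark-app       # gives each Pod a stable DNS name: spark-app-N.spark-app
  replicas: 2
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
        - name: spark
          image: my-spark-image            # placeholder image
          command: ["/opt/start.sh"]       # start script derives app name/port from POD_NAME
          env:
            - name: POD_NAME               # Pod name injected from metadata
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name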
Hey, I am not sure if this will exactly match your scenario, but here is something you can try: use a sidecar container to run the replica instances. A sidecar is a container that runs alongside the main container, shares the same namespaces, and can share volumes with the other containers.
Now, to pass different arguments to each container or sidecar, you will have to tweak the Dockerfile, or rather the way your container starts.
Create a start.sh script that accepts arguments and starts the container with them. The trick is to read the arguments from environment variables, which lets you configure them later from ConfigMaps or the Pod env.
Here is an example of a PHP/Laravel application running the same code but starting with different arguments. The start.sh file looks like this:
#!/bin/sh
if [ "${CONTAINER_ROLE}" = "queue" ]; then
    echo "Running the queue..."
    php artisan queue:work --queue="${QUEUENAME}"
    echo "Queue Started"
else
    echo "Running Iceberg."
    exec apache2-foreground
fi
A sample Dockerfile looks like this:
FROM php:7.1.24-apache
COPY . /srv/app
...
...
RUN chown -R www-data:www-data /srv/app \
    && a2enmod remoteip && a2enmod rewrite
WORKDIR /srv/app
RUN chmod +x .docker/start.sh
CMD ["sh", ".docker/start.sh"]
Let me know how it goes.
Is there any approach by which one container can call a command in another container? The containers are in the same pod.
I need many command-line tools which are shipped as images as well as in packages, but I don't want to install all of them into one container because of some concerns.
This is very possible as long as you have k8s v1.17+. You must enable shareProcessNamespace: true, and then all the container processes are available to the other containers in the same pod.
Here are the docs, have a look.
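A minimal sketch of a pod with a shared process namespace (the images are just examples). With this enabled, each container can see the other's processes and even reach the other's filesystem through /proc/<pid>/root:
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true          # one PID namespace for the whole pod
  containers:
    - name: app
      image: nginx
    - name: tools
      image: busybox
      command: ["sleep", "infinity"]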
In general, no, you can't do this in Kubernetes (or in plain Docker). You should either move the two interconnected things into the same container, or wrap some sort of network service around the thing you're trying to call (and then probably put it in a separate pod with a separate service in front of it).
There might be something you could do if you set up a service account, installed a Kubernetes API sidecar container, and used the Kubernetes API to do the equivalent of kubectl exec, but I'd consider this a solution of last resort.
Containers in a pod are isolated from each other except that they share volumes and the network namespace, so you would not be able to execute a command from one container in another. However, you could expose the commands in a container through an API.
We are currently running EKS v1.20 and we were able to achieve this using the shareProcessNamespace: true approach mentioned above. In our particular case, we needed a Debian 10 PHP container to execute a SAS binary command with arguments; SAS is installed and running in a CentOS 7 container in the same pod. Using Helm, we enabled shareProcessNamespace, and in the PHP container's command and args fields we built symlinks to that binary using bash -c once the pod came online. We grab the PID of the shared container using pgrep: since we know the CentOS container's entrypoint is tail -f /dev/null, we just look for that process with $(pgrep tail).
- image: some_php_container
  command: ["bash", "-c"]
  args: [ "SAS_PROC_PID=$(pgrep tail) && \
    ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8 /usr/bin/sas && \
    ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS /usr/local/SAS && \
    . /opt/script_runner.sh" ]
Now the PHP container is able to execute the SAS command with arguments and process data files using the SAS software running in the CentOS container.
One issue we quickly found is that if the SAS container dies in the pod, the PID changes and the symlinks in the PHP container break. So we added a liveness probe that frequently checks whether the path to the binary under the current PID exists; if the probe fails, the PHP container is restarted, rebuilding the symlinks with the right PID.
livenessProbe:
  exec:
    command:
      - bash
      - -c
      - SAS_PROC_PID=$(pgrep tail) && test -f /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 1
Hopefully the above info can help someone else.
You can do this without shareProcessNamespace by using a shared volume and some named pipes. It manages all the I/O for you and is trivially simple and extremely fast.
For a complete description and code, see this solution I created; it contains examples.
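As a rough illustration of the idea, assuming both containers mount the same emptyDir volume at /shared (paths are hypothetical):
# container2: create a FIFO on the shared volume and execute whatever line arrives
mkfifo /shared/cmd.pipe
while true; do
  # the read blocks until container1 writes a command
  sh -c "$(cat /shared/cmd.pipe)"
done

# container1: send a command to container2 through the pipe
echo "date" > /shared/cmd.pipe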
Is there any way I can exec into the container, then edit some code (e.g. add some logging, edit a configuration file, etc.) and restart the container to see what happens?
I tried to search for this but found nothing helpful.
The point is, I want to do a quick debug, not to do a full cluster deployment.
Some programs (e.g. nginx) support configuration reload without restarting their process; with these you can just kubectl exec in, change the config, and send a signal to the master process (e.g. kubectl exec <nginx_pod> -- kill -HUP 1). It is a feature of the software though, so many programs will not support it.
Containers are immutable by design, so they restart with a clean state each time. That said, since there is no simple way of doing this, there are hackish ways to achieve it.
One I can think of involves modifying the image on the node, which will then be used when the container restarts. If you can ssh into the node and access Docker directly, you can identify the container, modify a file inside it, and commit these changes with docker commit under the same tag. At that point the local image with that tag has your changes baked in, so if the container restarts (not reschedules, as it could then start on a different node), it will come up with your changes (assuming you do not use pullPolicy: Always).
Again, not the way it's meant to be used, but achievable.
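On the node it would look roughly like this (container ID, image name, and tag are placeholders):
# find the running container backing the pod
docker ps | grep my-app
# make your edits inside it
docker exec -it <container_id> sh
# bake the modified filesystem into the local image under the same tag
docker commit <container_id> my-app:mytag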
Any changes to the local container file system will be lost if you restart the pod. You would need to work out whether the application stack you are using can perform an internal restart without actually exiting.
What language/application stack are you using?
You should at least consider a hostPath volume, which shares local files on your host with your Kubernetes instance, in order to be able to do that kind of test.
After that, it is up to the application running within your pod to detect the file change and restart if needed (i.e. this is not specific to Kubernetes at all).
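A minimal sketch of such a hostPath mount (paths, names, and image are illustrative):
spec:
  containers:
    - name: app
      image: my-app:latest             # illustrative image
      volumeMounts:
        - name: local-code
          mountPath: /srv/app
  volumes:
    - name: local-code
      hostPath:
        path: /home/me/app             # directory on the node
        type: Directory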
You could put any configuration in a ConfigMap and then just apply that, obviously assuming whatever reads the ConfigMap will re-read it.
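For example, something along these lines (the file and ConfigMap names are assumptions):
# create the ConfigMap from a local file
kubectl create configmap app-config --from-file=app.conf
# after editing app.conf locally, re-apply it
kubectl create configmap app-config --from-file=app.conf --dry-run=client -o yaml | kubectl apply -f -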
I have faced the same issue in my container as well.
I did the below steps in the Kubernetes container and it worked.
Logged into the pod, e.g.:
kubectl exec --stdin --tty nginx-6799fc88d8-mlvqx -- /bin/bash
Once logged in to the application pod, ran the below commands:
apt-get update
apt-get install vim
Now I am able to use the vim editor in the Kubernetes container.
I wonder how one would implement a colocated auxiliary container in a Pod within a Deployment which does not provide a service but rather a job/batch workload.
The background to my question is that I want to deploy a scalable service where each instance needs configuration after its start. This configuration is done via an HTTP POST to its local colocated service instance. I've implemented an auxiliary container for this in order to benefit from colocation, so the auxiliary container always knows which instance needs to be configured.
The problem is that the restartPolicy is defined at the Pod level. I am looking for something like restart policy Always for the service and a different restart policy OnFailure for the configuration job.
I know that k8s provides the Job resource for such workloads. But is there an option to colocate those jobs with Pods?
Furthermore, I've stumbled across the so-called init containers, which can be defined via annotations. But these have the drawback that k8s ensures the actual Pod is only started after the init container has run, so for my scenario it seems unsuitable.
As I understand it, you need your service running in order to configure it.
Your solution is workable and you can set restartPolicy: Always; you just need a way to tell your one-off configuration container that it has already run. You could create and attach an emptyDir volume to your configuration container, create a file on it to mark the configuration as successful, and check for this file from your process. After initialization, the process sleeps in a loop. The downside is that some resources will be taken up by that container too.
Or you can just add an extra process in the same container and do the configuration there (maybe with the file mentioned above as a guard to avoid configuring twice). Write a simple shell script like this and run it instead of your main process:
#!/bin/sh
(
    [ -f /mnt/guard-vol/stamp ] && exit 0
    /opt/my-config-process parameters && touch /mnt/guard-vol/stamp
) &
exec /opt/my-main-process "$@"
Alternatively, you could implement a separate pod that queries the Kubernetes API for pods of your service with the label configured=false, configures them, and updates the label through the API. You should also modify your Service to select only configured=true pods.
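Roughly, that configurator could do something like this, assuming a configured=true/false label convention (label and app names are hypothetical):
# find pods that still need configuration
kubectl get pods -l app=my-service,configured=false
# ... configure the pod via its HTTP endpoint ...
# then flip the label so the Service starts selecting it
kubectl label pod <pod-name> configured=true --overwrite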
I created a MongoDB service according to the Kubernetes tutorial.
Now my question is: how do I gain access to the database itself, with a client like Robomongo or similar, just for making backups or exploring what data has been entered?
The mongo pod and service only have an internal endpoint and a single mount.
Is there any way to safely access this instance with no public endpoint?
Internally the URI is mongo:27***
You can use kubectl port-forward mypod 27017:27017 and then just connect your MongoDB client to localhost:27017.
If you want to stop, just hit Ctrl+C in the same terminal to stop the process.
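For example, with the legacy mongo shell (the pod name is whatever yours is called):
kubectl port-forward mypod 27017:27017
# in another terminal, or from Robomongo, connect to localhost:27017
mongo --host localhost --port 27017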
The Kubernetes command-line tool provides this functionality, as @ainlolcat stated.
kubectl get pods
retrieves the names of the currently running pods, and with:
kubectl exec -i mongo-controller-* bash
you get a basic bash, which lets you execute
mongo
to get into the database to create dumps and so on. The bash is very basic and has no features like completion; I have not found a better shell, but it does the job.
When you create a service in Kubernetes you give it a name, say for example "mymongo". After the service is created:
The DNS service of Kubernetes (on by default) will ensure that any pod can discover this service simply by its name, so you can set your URI like:
uri: mongodb://mymongo:27017/mong
In addition, the service IP and port will be set as environment variables in the running pod:
MYMONGO_SERVICE_HOST
MYMONGO_SERVICE_PORT
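For example, a container in the same namespace could build the connection string from those variables (the database name follows the URI above):
# assumes the service is named "mymongo"
echo "mongodb://${MYMONGO_SERVICE_HOST}:${MYMONGO_SERVICE_PORT}/mong"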
I have in fact written a blog post that shows a step-by-step example of an app with a Node.js web server and Mongo, which may explain this further:
http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/
Feedback welcome!
The answer from @grchallenge is correct, but it is deprecated as of 2021.
All newcomers please use:
kubectl exec mongo-pod-name -i -- bash