I want to send a command from a container to another container.
Is it possible with SSH?
If you have another method, please let me know.
You don't even need SSH. The kubernetes API can be used to execute commands in other containers (and stream input/output).
If you are not into Go, you can run the kubectl exec command from the container that needs to execute a command in another one.
For both solutions you will need a properly setup ServiceAccount.
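As a rough sketch, a minimal RBAC setup permitting this could look like the following (the name pod-exec and the default namespace are placeholders, and the exact verbs you need may differ in your cluster):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-exec            # placeholder name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: default
rules:
# allow looking up pods and creating exec sessions against them
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exec
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-exec
  namespace: default
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io

The calling pod then references it with serviceAccountName: pod-exec in its spec; a kubectl binary inside that pod should fall back to the mounted service account token when no kubeconfig is present.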
You can use kubectl exec to run commands in a container.
If you want to run a shell in a container, you can use the command kubectl exec -it XXX -- /bin/bash. Then you can run a command such as ping 8.8.8.8, e.g. to check the container's connectivity.
To execute a command directly, you can use kubectl exec -it XXX -- /bin/bash -c "command(s)" instead.
Related
I am using the mssql docker image (Linux) for SQL Server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do the same in a container deployed in an AKS cluster:
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then try to ssh into the node and use docker from inside:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same:
mcr.microsoft.com/mssql/server:2019-latest, untouched.
David Maze made a good point in the comments:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in Docker and logged in as root. However, in k8s it is a completely different container (perhaps even built from a different image). Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to build a new image with all the components and the configuration you need. For more information, look at the pod lifecycle documentation.
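As a rough sketch of that approach (the apt package shown is only an example; anything you would otherwise do as root in the live container goes here instead), you could derive your own image from the stock one:

# hypothetical Dockerfile building on the stock SQL Server 2019 image
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
# example only: install tools / apply the changes you need as root
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
# switch back to the unprivileged user the image normally runs as
USER mssql

Build and push that image to your registry, then point the AKS deployment at it instead of mcr.microsoft.com/mssql/server:2019-latest.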
Is it possible to run a Linux command against a process which is running inside a Kubernetes pod? Example: I want to grab heap dumps of a Java process running inside a k8s pod. The pod comes with a minimal installation and does not have much disk space either, so I want to run the jmap command from my local machine (pointing to the k8s cluster). Thanks.
As I have already mentioned in the comments, what you can use is the kubectl exec command:
Execute a command in a container.
Usage:
$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
The kubectl exec command is a tool that allows you to inspect and debug your applications, by executing commands inside your containers.
If you need more details and examples regarding how to use it, I recommend these two guides:
Get a Shell to a Running Container: This page shows how to use kubectl exec to get a shell to a running container.
How does kubectl exec work?
kubectl exec did it. It allows you to run any command inside the container. For example:
kc exec <POD_NAME> -- jmap -dump:live,format=b,file=heapdump.bin <pid>
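If you also need the dump on your local machine, one option (assuming the dump fits on the pod's filesystem, and that kc is an alias for kubectl as in the command above) is to write it to a known path and copy it out with kubectl cp:

# dump to a fixed path inside the pod, then copy it to the local machine
kc exec <POD_NAME> -- jmap -dump:live,format=b,file=/tmp/heapdump.bin <pid>
kc cp <POD_NAME>:/tmp/heapdump.bin ./heapdump.bin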
I am trying to open the terminal inside the container and execute the command.
When I use kubectl exec -it POD_NAME, I cannot connect; I get a connection timeout.
Do you know other methods instead of kubectl exec to open a terminal inside the container?
Yes!
ssh into the Kubernetes node/machine where your container is running and run:
$ docker exec -it <container-name> sh
or if you have bash in the container
$ docker exec -it <container-name> bash
The fact that it's timing out means that you may have some other networking issues in your cluster like a firewall preventing access, your kube-apiserver not being accessible, or your network overlay not configured the way it's supposed to.
This is the best guide I know to understanding how kubectl exec ... works under the hood, if you'd like to work out where things might be going wrong.
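As a starting point for narrowing it down, a rough sketch of some checks (the node IP and the default kubelet port 10250 are assumptions about a standard setup):

# can you reach the API server at all?
kubectl cluster-info
kubectl get --raw='/healthz'
# kubectl exec is proxied by the API server to the kubelet on the pod's node,
# which by default listens on port 10250
nc -vz <node-ip> 10250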
Please, before you comment or answer: this question is about a CLI program, not a service. Apparently 90% of Kubernetes has to do with running services, so the documentation for CLI programs meant to be part of a pipeline workflow is sparse.
I have a command line program that uses stdout for JSON results.
I have a docker image for the command line program.
If I create the container as a Kubernetes Job, then stdout and stderr are mixed and require heuristic scrubbing to get pure JSON out.
The stderr messages are from native libraries outside of my direct control.
Supposedly, if I run kubectl exec against a running pod, I will get the normal stdout/stderr pipes.
Is there a way to just have the pod running without an entrypoint (or some dummy service entrypoint) with the sole purpose of running kubectl exec against it?
Is there a way to just have the pod running without an entrypoint [...]?
A pod consists of one or more containers, each of which has an individual entrypoint. It is certainly possible to run a container with a dummy command, for example, you can build an image with:
CMD sleep inf
This will run a container that will persist until you kill it, and you could happily docker exec into it.
You can apply the same solution to k8s. You could build an image as described above and deploy that in a pod, or you could use an existing image and simply set the command, as in:
spec:
  containers:
  - name: mycontainer
    image: myexistingimage
    command: ["sleep", "inf"]
You can use kubectl like the docker CLI: https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/
kubectl run just does the job. There is no need for a workaround.
Additionally, you can attach I/O and disable automatic restart:
kubectl run -i -t busybox --image=busybox --restart=Never
I'm trying to run an interactive Pod (container) in Kubernetes that does not create a Job or Deployment and deletes itself after completing.
The purpose of the container is to give our developers an easy way to access our database, which doesn't have a public IP address.
Currently, we are using this command:
kubectl run -i --tty proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
which works the first time you run it, however, after exiting the session if you try to run the above again to connect to the database again, we get:
Error from server: jobs.extensions "proxy-pgclient" already exists
Forcing the developer to delete the job with:
kubectl delete job proxy-pgclient
before they can run the command and connect again.
Is there any way of starting up an interactive container (Pod) in Kubernetes without creating a Job or Deployment object and having that container be deleted when the interactive session is closed?
Adding the "--rm" flag to the original command resulted in the Job (and Pod) being deleted at the completion of the interactive session, which is what I was after. The command then becomes:
kubectl run -i --tty --rm proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
There isn't a short kubectl command that will do exactly what you want. Instead, you can create a yaml/json file with your pod description and run kubectl create -f pod.yaml. Your pod can be set to never restart, so it will terminate once it exits.
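A rough sketch of such a pod description, reusing the image, credentials and host from the command in the question (adjust as needed):

apiVersion: v1
kind: Pod
metadata:
  name: proxy-pgclient
spec:
  restartPolicy: Never          # the pod terminates once psql exits
  containers:
  - name: pgclient
    image: private-registry.com/pgclient
    stdin: true
    tty: true
    env:
    - name: PGPASSWORD
      value: "foobar"
    command: ["psql", "-h", "dbhost.local", "-p", "5432", "-U", "pg_admin", "-W", "postgres"]

Create it with kubectl create -f pod.yaml and attach with kubectl attach -it proxy-pgclient; you would still delete the pod yourself afterwards (kubectl delete pod proxy-pgclient), since nothing removes it automatically.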