I am trying to open a terminal inside the container and execute a command.
When I use kubectl exec -it POD_NAME, I cannot connect; I get a connection timeout.
Do you know of any other methods besides kubectl exec to open a terminal inside the container?
Yes!
ssh into the Kubernetes node/machine where your container is running and run:
$ docker exec -it <container-name> sh
or if you have bash in the container
$ docker exec -it <container-name> bash
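If you're not sure which node the Pod landed on, or what the container is called there, something like this can help you find it first; the commands are only a sketch, and the node address is whatever you normally use to reach your nodes:
$ kubectl get pod POD_NAME -o wide        # shows which node the Pod is scheduled on
$ ssh <node-address>
$ docker ps | grep POD_NAME               # container names on the node include the pod name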
The fact that it's timing out means that you may have some other networking issue in your cluster, like a firewall preventing access, your kube-apiserver not being reachable, or your network overlay not being configured the way it's supposed to be.
If you'd like to understand where things might be going wrong, this is the best guide I know of to how kubectl exec ... works under the hood.
I was trying to get into kubernetes-dashboard Pod, but I keep getting this error:
C:\Users\USER>kubectl exec -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
The Pod is running normally and I can access the Kubernetes UI via the browser. However, I had some issues getting it running earlier, and I wanted to get inside the Pod to run some commands, but I always get the same error mentioned above.
When I try the same command with a pod running nginx for example, it works:
C:\Users\USER>kubectl exec my-nginx -it -- sh
/ # ls
bin home proc sys
dev lib root tmp
docker-entrypoint.d media run usr
docker-entrypoint.sh mnt sbin var
etc opt srv
/ # exit
Any explanation, please?
Prefix the command to run with /bin so your updated command will look like:
kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh
The reason you're getting that error is that Git Bash on Windows (MSYS) slightly mangles command arguments such as paths. Using the full path /bin/sh or /bin/bash generally works.
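If the MSYS argument mangling is indeed the culprit, another workaround is to disable the path conversion for just that invocation. Git for Windows' bash honors the MSYS_NO_PATHCONV variable; this is only a sketch and assumes you run kubectl from Git Bash, where that conversion applies:
MSYS_NO_PATHCONV=1 kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- sh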
That error message means literally what it says: there is no sh or any other shell in the container. There's no particular requirement that a container have a shell, and if a Docker image is built FROM scratch (as the Kubernetes dashboard image is) or from a "distroless" base, it simply may not contain one.
In most cases you shouldn't need to "enter a container", and you should use kubectl exec (or docker exec) sparingly if at all. This is doubly true in Kubernetes: not only will changes you make manually be lost when the container exits, but you typically have multiple replicas that you can't edit by hand all at once, and in some cases the cluster can delete and recreate a Pod outside of your control.
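If you really do need an interactive shell next to a shell-less container like the dashboard, newer clusters (with ephemeral containers enabled) let kubectl debug attach a throwaway container that brings its own tooling to the running Pod. This is only a sketch; the busybox image and the --target container name are assumptions you'd adapt to your Pod:
kubectl debug -n kubernetes-dashboard -it <POD_NAME> --image=busybox:1.36 --target=kubernetes-dashboard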
I am using the mssql docker image (Linux) for sql server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do the same in a container deployed in an AKS cluster:
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then try to ssh into the node and use docker from inside:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same: mcr.microsoft.com/mssql/server:2019-latest, untouched.
David Maze put it well in a comment:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently you have to build a new image. Everything you described behaved exactly as it was supposed to. First, the container you exec into with docker and log into as root is not the same container that runs in the k8s cluster; perhaps a different image is even used there. Second, even if you did make a change, it would only exist until the container dies. If you want to modify something permanently, you have to build a new image with all the components and configuration you need, as sketched below. For more information, look at the Pod lifecycle documentation.
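As a rough illustration, a custom image that bakes your changes into the mssql image might look like the Dockerfile below. The package to install is a placeholder; the sketch assumes the image is Ubuntu-based with apt-get available, which is the case for the official mssql images:
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
# install whatever tooling you need while you are root (placeholder package name)
RUN apt-get update && apt-get install -y --no-install-recommends <your-package> && rm -rf /var/lib/apt/lists/*
# drop back to the image's default non-root user
USER mssql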
Is it possible to run a Linux command against a process which is running inside a Kubernetes Pod? Example: I want to grab heap dumps of a Java process running inside a k8s Pod. The Pod comes with a minimal installation and does not have much disk space either, so I want to run the jmap command from my local machine (pointing to the k8s cluster). Thanks.
As I have already mentioned in the comments, what you can use is the kubectl exec command:
Execute a command in a container.
Usage:
$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
The kubectl exec command is a tool that allows you to inspect and debug your applications, by executing commands inside your containers.
If you need more details and examples regarding how to use it, I recommend these two guides:
Get a Shell to a Running Container: This page shows how to use kubectl exec to get a shell to a running container.
How does kubectl exec work?
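For instance, to list the Java processes (and their pids) inside the Pod before taking a dump, something like the following should work, assuming the image ships a JDK with jps on its PATH:
$ kubectl exec <POD_NAME> -- jps -l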
kubectl exec did it. It allows you to run any command inside the container. For example:
kubectl exec <POD_NAME> -- jmap -dump:live,format=b,file=heapdump.bin <pid>
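Since the Pod has little disk space, you may also want to pull the dump out to your local machine right away. kubectl cp can do that, assuming the image contains tar (which kubectl cp relies on); the in-Pod path depends on the working directory jmap wrote the file to:
kubectl cp <POD_NAME>:heapdump.bin ./heapdump.bin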
I want to send a command from one container to another container.
Is it possible with SSH?
If you have another method, please let me know.
You don't even need SSH. The Kubernetes API can be used to execute commands in other containers (and stream input/output).
If you are not into Go, you can use the kubectl exec command from the container that needs to execute a command in another one.
For both solutions you will need a properly setup ServiceAccount.
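As a rough sketch, the Role bound to that ServiceAccount needs permission on the pods/exec subresource; the names and namespace below are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]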
You can use kubectl exec to run commands in a container.
If you want to run a shell in a container you can use the command kubectl exec -it XXX -- /bin/bash. Then you can run a command like ping 8.8.8.8, e.g. to check the container's connectivity.
To execute a command directly you can use kubectl exec -it XXX -- /bin/bash -c "command(s)" instead.
I'm trying to run an interactive Pod (container) in Kubernetes that does not create a Job or Deployment and deletes itself after completing.
The purpose of the container is to give our developers an easy way to access our database, which doesn't have a public IP address.
Currently, we are using this command:
kubectl run -i --tty proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
which works the first time you run it. However, after exiting the session, if you try to run the above again to reconnect to the database, you get:
Error from server: jobs.extensions "proxy-pgclient" already exists
This forces the developer to delete the job with:
kubectl delete job proxy-pgclient
before they can run the command and connect again.
Is there any way of starting up an interactive container (Pod) in Kubernetes without creating a Job or Deployment object and having that container be deleted when the interactive session is closed?
Adding the "--rm" flag to the original command resulted in the Job (and Pod) being deleted at the completion of the interactive session, which is what I was after. The command then becomes:
kubectl run -i --tty --rm proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
There isn't a short kubectl command that will do exactly what you want. Instead, you can create a YAML/JSON file with your Pod description and run kubectl create -f pod.yaml. Your Pod can be set to never restart, so it will terminate once it exits.
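A minimal pod.yaml along those lines might look like the sketch below; the image, credentials, and database host are the placeholders from the question:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pgclient
spec:
  restartPolicy: Never
  containers:
  - name: proxy-pgclient
    image: private-registry.com/pgclient
    stdin: true
    tty: true
    env:
    - name: PGPASSWORD
      value: foobar
    command: ["psql", "-h", "dbhost.local", "-p", "5432", "-U", "pg_admin", "-W", "postgres"]
You would then kubectl create -f pod.yaml and kubectl attach -it proxy-pgclient to get the interactive psql session; note that the completed Pod still has to be removed afterwards with kubectl delete pod proxy-pgclient, since finished Pods are not garbage-collected automatically.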