How can I run a Kubernetes pod with the sole purpose of running exec against it?

Please, before you comment or answer: this question is about a CLI program, not a service. Apparently 90% of Kubernetes has to do with running services, so documentation is sparse for CLI programs meant to be part of a pipeline workflow.
I have a command line program that uses stdout for JSON results.
I have a docker image for the command line program.
If I create the container as a Kubernetes Job, then stdout and stderr are mixed and require heuristic scrubbing to get pure JSON out.
The stderr messages are from native libraries outside of my direct control.
Supposedly, if I run kubectl exec against a running pod, I will get the normal stdout/stderr pipes.
Is there a way to just have the pod running without an entrypoint (or some dummy service entrypoint) with the sole purpose of running kubectl exec against it?

Is there a way to just have the pod running without an entrypoint [...]?
A pod consists of one or more containers, each of which has an individual entrypoint. It is certainly possible to run a container with a dummy command, for example, you can build an image with:
CMD sleep inf
This will run a container that will persist until you kill it, and you could happily docker exec into it.
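A minimal sketch of such an image (the base image here is an assumption; sleep inf needs GNU coreutils' sleep, which busybox's sleep lacks):

FROM ubuntu:22.04
CMD ["sleep", "inf"]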
You can apply the same solution to k8s. You could build an image as described above and deploy that in a pod, or you could use an existing image and simply set the command, as in:
spec:
  containers:
  - name: mycontainer
    image: myexistingimage
    command: ["sleep", "inf"]

You can use kubectl much like the docker CLI: https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/
kubectl run does the job; there is no need for a workaround.
Additionally, you can attach I/O and disable automatic restart:
kubectl run -i -t busybox --image=busybox --restart=Never
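For the pipeline use case in the question, a hedged sketch (pod name and CLI command are placeholders) is to start the pod with a long-running dummy command and then exec against it:

kubectl run cli-pod --image=myexistingimage --restart=Never -- sleep 3600
kubectl exec cli-pod -- my-cli-command > results.json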

Related

Why can't I get into the container running "kubernetes-dashboard"?

I am trying to get into the kubernetes-dashboard Pod, but I keep getting this error:
C:\Users\USER>kubectl exec -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
The Pod is running normally and I can access the Kubernetes UI via the browser. But I ran into some issues getting it running earlier, and I wanted to get inside the pod to run some commands, but I always get the same error mentioned above.
When I try the same command with a pod running nginx for example, it works:
C:\Users\USER>kubectl exec my-nginx -it -- sh
/ # ls
bin home proc sys
dev lib root tmp
docker-entrypoint.d media run usr
docker-entrypoint.sh mnt sbin var
etc opt srv
/ # exit
Any explanation, please?
Prefix the command to run with /bin, so your updated command will look like:
kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh
The reason you're getting that error is that Git for Windows ships an MSYS layer that can rewrite command arguments. Spelling the command out as /bin/sh or /bin/bash generally works around this.
That error message means literally what it says: there is no sh or any other shell in the container. There's no particular requirement that a container have a shell, and if a Docker image is built FROM scratch (as the Kubernetes dashboard image is) or from a "distroless" base, it simply may not contain one.
In most cases you shouldn't need to "enter a container", and you should use kubectl exec (or docker exec) sparingly if at all. This is doubly true in Kubernetes: changes you make manually will be lost when the container exits; you typically have multiple replicas that you can't manually edit all at once; and in some cases the cluster can delete and recreate a Pod outside of your control.
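If you do need an interactive look inside a shell-less container, one hedged option (assuming a cluster recent enough to support ephemeral debug containers) is kubectl debug, which attaches a second image that does contain a shell; pod and container names here are placeholders:

kubectl debug -it <POD_NAME> --image=busybox --target=<CONTAINER_NAME>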

Start an interactive shell in a SQL Server 2019 container running in an AKS pod

I am using the mssql docker image (Linux) for SQL Server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do this in a container deployed in an AKS cluster
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then try to ssh into the node and use docker from inside:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same,
mcr.microsoft.com/mssql/server:2019-latest untouched
David Maze made a good point in a comment:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently, you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in Docker and logged in as root; in k8s, however, it is a completely different container, and perhaps a different image is used. Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to create a new image with all the components and the configuration you need; a sketch follows. For more information, look at the pod lifecycle documentation.
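A hedged sketch of such a custom image (the package list is a placeholder for whatever you actually need as root):

FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
# bake in the tooling you would otherwise have installed by hand at runtime
RUN apt-get update && apt-get install -y --no-install-recommends <your-packages> && rm -rf /var/lib/apt/lists/*
# drop back to the image's default non-root user
USER mssql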

How can I keep a Pod from crashing so that I can debug it?

In Kubernetes, when a Pod repeatedly crashes and is in CrashLoopBackOff status, it is not possible to shell into the container and poke around to find the problem, because containers (unlike VMs) live only as long as their primary process. If I shell into a container and the Pod is restarted, I'm kicked out of the shell.
How can I keep a Pod from crashing so that I can investigate if my primary process is failing to boot properly?
Redefine the command
In development only, a temporary hack to keep a Kubernetes pod from crashing is to redefine the Pod, specifying the container's command (corresponding to a Docker ENTRYPOINT) and args as a command that will not crash. For instance:
containers:
- name: something
  image: some-image
  # `sh -c` evaluates a string as shell input
  command: ["sh", "-c"]
  # loop forever, outputting "yo" every 5 seconds
  args: ["while true; do echo 'yo' && sleep 5; done;"]
This allows the container to run and gives you a chance to shell into it, like kubectl exec -it pod/some-pod -- sh, and investigate what may be wrong.
This needs to be undone after debugging so that the container will run the command it's actually meant to run.
Adapted from this blog post.
There are also other methods for debugging pods that are worth noting for your use case:
If your container has previously crashed, you can access the previous container's crash log with: kubectl logs --previous ${POD_NAME} -c ${CONTAINER_NAME}
Debugging with an ephemeral debug container: ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images. kubectl can create ephemeral containers for debugging beginning with version v1.18, where the command was still alpha. An example of this method is sketched below.
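A related, hedged sketch from the same family of tools: since the Pod keeps crashing, you can have kubectl debug copy it while replacing its command, so the copy stays up for inspection (my-debugger is a placeholder name, as are the pod and container variables):

kubectl debug ${POD_NAME} -it --copy-to=my-debugger --container=${CONTAINER_NAME} -- sh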
In my case, I had built the image on an Apple Silicon (M1) Mac. The pod crashed, and there was no explicit message about why. The problem was masked because I was also debugging with Docker on the same M1 machine, so locally everything looked fine. The fix was to build the image with docker build --platform linux/amd64.
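For example, from an M1 machine (image name and tag are placeholders):

docker build --platform linux/amd64 -t myimage:latest .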

Running a Linux command against a PID inside a k8s pod

Is it possible to run a Linux command against a process running inside a Kubernetes pod? Example: I want to grab heap dumps of a Java process running inside a k8s pod. The pod comes with a minimal installation and does not have much disk space either, so I want to run the jmap command from my local machine (pointing at the k8s cluster). Thanks.
As I have already mentioned in the comments, what you can use is the kubectl exec command:
Execute a command in a container.
Usage:
$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
The kubectl exec command is a tool that allows you to inspect and debug your applications, by executing commands inside your containers.
If you need more details and examples regarding how to use it, I recommend these two guides:
Get a Shell to a Running Container: This page shows how to use kubectl exec to get a shell to a running container.
How does kubectl exec work?
kubectl exec did it. It lets you run any command inside the container. For example:
kubectl exec <POD_NAME> -- jmap -dump:live,format=b,file=heapdump.bin <pid>
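Since the pod has little disk space, you can pull the dump to your local machine afterwards and remove it from the pod (paths and names are placeholders). Note that kubectl cp needs a tar binary inside the container; if the minimal image lacks one, streaming the file through cat works too:

kubectl cp <POD_NAME>:heapdump.bin ./heapdump.bin
kubectl exec <POD_NAME> -- cat heapdump.bin > heapdump.bin
kubectl exec <POD_NAME> -- rm heapdump.bin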

About Kubernetes: how can I get containers to cooperate?

I want to send a command from one container to another container.
Is it possible with SSH?
If you have another method, please let me know.
You don't even need SSH. The Kubernetes API can be used to execute commands in other containers (and to stream input/output).
If you are not into Go, you can use the kubectl exec command from the container that needs to execute a command in another one.
For both solutions you will need a properly configured ServiceAccount; a sketch follows.
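A hedged sketch of the RBAC such a ServiceAccount needs (all names are placeholders; exec'ing into a pod maps to the create verb on the pods/exec subresource):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]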
You can use kubectl exec to run commands in a container.
If you want to run a shell in a container, you can use kubectl exec -it XXX -- /bin/bash. Then you can run a command like ping 8.8.8.8, e.g. to check the container's connectivity.
To execute a command directly, you can use kubectl exec -it XXX -- /bin/bash -c "command(s)" instead.