Kubernetes Pod: access issues

I'm getting the following error while accessing the pod:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126
I tried with both /bin/sh and /bin/bash.
I terminated the node this pod was running on and brought up a new node, but the result is the same.
I also tried deleting the pod, but the new pod behaves the same way.

This is because the container you're trying to access doesn't have the /bin/bash executable.
If you really want to execute a shell in the container, you have to use a container image that includes a shell (e.g. /bin/sh or /bin/bash).
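If you can't change the image, one possible alternative (assuming a cluster recent enough to support ephemeral debug containers; mypod and mycontainer are hypothetical names here) is to attach a debug container that brings its own shell:
# Start a busybox ephemeral container targeting the pod's container,
# sharing its process namespace, and open busybox's own sh in it.
kubectl debug -it mypod --image=busybox --target=mycontainer -- sh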

Related

Eventual failure: kubectl exec fails with "operation not permitted: unknown"

I have some Pods that are running some Python programs. Initially I'm able to execute simple commands in the Pods. However, after some time (maybe hours?) I start to get the following error:
$ kubectl exec -it mypod -- bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "37a9f1042841590e48e1869f8b0ca13e64df02d25458783e74d8e8f2e33ad398": OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown
If I restart the Pods, then this clears the condition. However, I'd like to figure out why this is happening to avoid having to restart Pods each time.
The Pods are running a simple Python script, and the Python program is still running as normal (kubectl logs shows what I expect).
Also, I'm running K3s for Kubernetes across 4 nodes (1 master, 3 workers). I noticed that all Pods running on certain nodes started to experience this issue. For example, initially all Pods on worker2 and worker3 had this issue (but all Pods on worker1 did not). Eventually all Pods across all worker nodes started to have this problem. So it appears to be related to some condition on the node that prevents exec from running; however, restarting the Pods resets the condition.
As far as I can tell, the containers are running fine in containerd. I can log into the nodes and containerd shows the containers are running, I can check logs, etc...
What else should I check?
Why would the ability to exec stop working? (but containers are still running)
There are a couple of GitHub issues about this, from around the middle of August. They point to an SELinux issue that was fixed in runc v1.1.4. Check your runc version, and if it is below that version, update it.
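You can check the version directly on each node (assuming runc and the SELinux tools are on the PATH there):
# Prints the runc version, commit, and the OCI spec version it implements.
runc --version
# Shows the current SELinux mode: Enforcing, Permissive, or Disabled.
getenforce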
Otherwise, you can disable SELinux if you aren't working in production:
setenforce 0
or, if you want a more targeted solution, try this: https://github.com/moby/moby/issues/43969#issuecomment-1217629129

Why can't I get into the container running "kubernetes-dashboard"?

I was trying to get into the kubernetes-dashboard Pod, but I keep getting this error:
C:\Users\USER>kubectl exec -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
The Pod is running normally and I can access the Kubernetes UI via the browser. But I had some issues getting it running earlier, and I wanted to get inside the pod to run some commands, but I always get the same error mentioned above.
When I try the same command with a pod running nginx for example, it works:
C:\Users\USER>kubectl exec my-nginx -it -- sh
/ # ls
bin home proc sys
dev lib root tmp
docker-entrypoint.d media run usr
docker-entrypoint.sh mnt sbin var
etc opt srv
/ # exit
Any explanation, please?
Prefix the command to run with /bin, so your updated command will look like:
kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh
The reason you're getting that error is that Git for Windows uses MSYS, which slightly rewrites command arguments. Generally, using /bin/sh or /bin/bash works universally.
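If MSYS path mangling really is the culprit, a commonly suggested Git-for-Windows workaround (an assumption here, not verified against this exact setup) is to disable path conversion for that one command:
# MSYS_NO_PATHCONV=1 stops Git Bash from rewriting POSIX-style
# arguments such as /bin/sh into Windows paths.
MSYS_NO_PATHCONV=1 kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh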
That error message means literally what it says: there is no sh or any other shell in the container. There's no particular requirement that a container have a shell; if a Docker image is built FROM scratch (as the Kubernetes dashboard image is) or from a "distroless" base, it may simply not contain one.
In most cases you shouldn't need to "enter a container", and you should use kubectl exec (or docker exec) sparingly if at all. This is doubly true in Kubernetes: changes you make manually will be lost when the container exits, you typically have multiple replicas that you can't manually edit all at once, and in some cases the cluster can delete and recreate a Pod outside of your control.
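As a minimal illustration of why there may be nothing to exec into, here is a hypothetical binary-only Dockerfile: an image built this way contains one static binary and nothing else, so no sh exists anywhere in its filesystem:
# FROM scratch starts from an empty filesystem: no libc, no coreutils, no shell.
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]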

Path expansion for volume overlays uses root with Rancher Desktop (OSX)

We're trying to use Rancher Desktop and nerdctl to bring up some compose stacks. The issue we've run into is that with a mount like the following, nerdctl expands ${HOME} (or ~/) to /root, even though I'm running the command as my normal user.
volumes:
  - ${HOME}/.config/conf:/opt/confpath
FATA[0000] failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/root/.config/conf" to rootfs at "/opt/confpath" caused: stat /root/.config/conf: no such file or directory: unknown
FATA[0000] error while creating container u42-docs: exit status 1
Is there a way to prevent this path expansion to the root user, and instead be my own user?
Rancher desktop was installed via brew install --cask rancher
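One workaround sketch (an assumption, since the expansion appears to happen inside Rancher Desktop's VM, where the daemon-side user is root): skip variable expansion entirely and write out the absolute host path, e.g. for a hypothetical user myuser:
volumes:
  # Absolute macOS host path instead of ${HOME}, which may be expanded as root in the VM.
  - /Users/myuser/.config/conf:/opt/confpath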

How to customize an official Docker Postgres image and build it?

I need to change the locale of the official Postgres (11.4) image in order to create databases in my language.
https://github.com/docker-library/postgres/blob/87b15b6c65ba985ac958e7b35ba787422113066e/11/Dockerfile
I copied the Dockerfile and docker-entrypoint.sh from the official Postgres image (I have not added the customization yet):
aek@ubuntu:~/Desktop/Docker$ ls
docker-entrypoint.sh Dockerfile
aek@ubuntu:~/Desktop/Docker$ sudo docker build -t postgres_custom .
Step 24/24 : CMD ["postgres"]
---> Running in 8720b67094b1
Removing intermediate container 8720b67094b1
---> eb63a36ee850
Successfully built eb63a36ee850
Successfully tagged postgres_custom:latest
The image builds successfully, but when I try to run it I get the error below:
aek@ubuntu:~/Desktop/Docker$ docker run --name postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres_custom
d75b25367f019e3398f7daff78260e87c02a0c1898658585ec04bbd219bbe3e9
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
I can't figure out what is wrong with the entrypoint.sh. Can you please help me?
You need to make entrypoint.sh executable:
RUN chmod +x /path/to/entrypoint.sh
since you said you copied it without any further changes.
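In the copied Dockerfile, that would look roughly like this (a sketch; the /usr/local/bin/ destination is assumed from the official image's layout):
# Copy the entrypoint script into the image and ensure it is executable.
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
Alternatively, running chmod +x docker-entrypoint.sh on the host before building also works, since COPY preserves the file's permission bits.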

API error (500): Container command not found or does not exist

(The question attached the kubectl describe pods output and the pod logs as screenshots.)
The pod spec contains:
command:
  - bundle exec unicorn -c config/unicorn/production.rb -E production
The container can't start on k8s, and some errors occurred.
But when I run
docker run -d image [CMD]
directly, the container works fine.
"command" is an array, so each argument has to be a separate element, not all on one line
For anyone else running into this problem:
make sure the gems (including unicorn) are actually installed in the volume used by the container. If not, do a bundle install.
Another reason for this kind of error could be that the directory specified under working_dir (in the docker-compose.yml) does not exist (see Misleading error message "ERROR: Container command not found or does not exist.").