We're trying to use Rancher Desktop and nerdctl to bring up some compose stacks. The issue we've run into is that when you have a mount similar to the following, nerdctl expands ${HOME} (or ~/) to /root, even though I'm running the command as my normal user.
volumes:
- ${HOME}/.config/conf:/opt/confpath
FATA[0000] failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/root/.config/conf" to rootfs at "/opt/confpath" caused: stat /root/.config/conf: no such file or directory: unknown
FATA[0000] error while creating container u42-docs: exit status 1
Is there a way to prevent this expansion to the root user's home directory and have the path resolve to my own user's home instead?
Rancher Desktop was installed via brew install --cask rancher
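One possible workaround (an assumption on my side, not confirmed in the thread): pin the already-expanded host path in a .env file next to the compose file, so nerdctl has nothing left to expand inside the VM. CONF_DIR is a made-up variable name.

```shell
# Write the already-expanded host path into .env (CONF_DIR is hypothetical).
echo "CONF_DIR=$HOME/.config/conf" > .env
# The compose file would then reference the pinned variable instead of ${HOME}:
#   volumes:
#     - ${CONF_DIR}:/opt/confpath
```

Since CONF_DIR is unlikely to be set in the VM's environment, the .env value should win during interpolation.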
Related
I was trying to run the postgres:12.0-alpine image with an arbitrary user, in an attempt to have easier access to the mounted drives. However, I get the following error. I was following the instructions from the official Docker Hub page.
docker run -it --user "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro postgres:12.0-alpine
I get: initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
Then I tried initializing the target directory separately, which requires a restart in between. This also fails with the same error, but this time the container starts as the root user.
Has anyone had success running the postgres alpine container with an arbitrary user?
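For what it's worth, here is a sketch of the workaround I would try (not from the thread; paths and names are illustrative): pre-create the data directory on the host so it is already owned by the invoking user, then bind-mount it over the image's default PGDATA. The docker call is guarded so the snippet is safe to run on machines without docker.

```shell
# Pre-create the data directory so initdb doesn't have to chmod a root-owned dir.
mkdir -p "$PWD/pgdata"

# Guarded: only attempt the run where a docker CLI is present.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name pg \
    --user "$(id -u):$(id -g)" \
    -v /etc/passwd:/etc/passwd:ro \
    -v "$PWD/pgdata":/var/lib/postgresql/data \
    postgres:12.0-alpine
fi
```

The key difference from the original command is that initdb now writes into a directory the arbitrary user already owns, so the "could not change permissions" step never has to run against a root-owned mount.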
Getting the following error while accessing the pod:
"OCI runtime exec failed: exec failed: container_linux.go:348:
starting container process caused "exec: \"/bin/bash\": stat
/bin/bash: no such file or directory": unknown command terminated with
exit code 126"
Tried with both /bin/sh and /bin/bash.
Terminated the node the pod was running on and brought up a new node, but the result is the same.
Also tried deleting the pod, but the new pod behaves the same way.
This is because the container you're trying to access doesn't have the /bin/bash executable.
If you really want to execute a shell in the container, you have to use a container image that includes a shell (e.g. /bin/sh or /bin/bash).
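As a quick sanity check before exec'ing, you can test whether a filesystem tree ships a shell at all; a minimal sketch (the helper name and paths are mine, not from the thread):

```shell
# Succeeds if the given root filesystem contains an executable /bin/sh or /bin/bash.
has_shell() {
  [ -x "$1/bin/sh" ] || [ -x "$1/bin/bash" ]
}
```

Run against an exported image rootfs (e.g. from docker export), this tells you up front whether an exec of sh or bash can possibly work, instead of decoding exit code 126 after the fact.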
I've been trying to mount a volume to a Docker container running ClickHouse, specifically on Docker Desktop for Windows 10. I'm following the documentation:
https://hub.docker.com/r/yandex/clickhouse-server/
I have no problem setting up the container on my C: drive, which is in my $HOME path, loading data into it, and so on. I now want to mount a custom volume on my E: drive, which is larger, since the database will continue to grow. I get an error when I run this:
docker run -d -p 8123:8123 --name clickhousedb --ulimit nofile=262144:262144 --volume=/E:/ch/clickhousedb:/var/lib/clickhouse yandex/clickhouse-server
specifically this:
Error response from daemon: invalid mode: /var/lib/clickhouse.
Any ideas what might be the issue?
The issue is the "/" character right after "--volume=", which causes the docker CLI to split the string as:
empty string (directory to be mounted)
E:/ch/clickhousedb (mounting point inside the container)
/var/lib/clickhouse (mounting mode)
Docker thought "/var/lib/clickhouse" was the mode for the volume mount, hence the error message.
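A simplified model of that split (this ignores Docker's Windows drive-letter handling, so it is only an illustration): everything after the last colon is parsed as the mount mode, which is exactly the string in the error.

```shell
spec='/E:/ch/clickhousedb:/var/lib/clickhouse'
# The CLI treats the text after the last ':' as the mode field.
mode="${spec##*:}"
echo "$mode"   # -> /var/lib/clickhouse, the "invalid mode" from the error
```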
It also seemed to be a permission issue; I was able to mount the root of the E: drive instead:
docker run -d -p 8134:8123 --name clickhousedb --ulimit nofile=262144:262144 --volume=E:/:/var/lib/clickhouse yandex/clickhouse-server
I'm using Minikube on a Windows-based machine. On the same machine, I also have docker-machine set up.
I've pointed the docker client at Minikube's docker environment, so I can see the Docker environment inside Kubernetes.
I can build docker images and run docker containers from the Minikube VM without issues. However, when I try to start any docker container via kubectl (from PowerShell), it fails to start, as if kubectl can't find the docker image, with the following error:
Failed to pull image "image name": rpc error: code = Unknown desc =
Error response from daemon: repository "image-repo-name" not found:
does not exist or no pull access Error syncing pod
I don't know what's missing. If "docker run" can access the image, why can't "kubectl"?
Here is my Dockerfile:
FROM node:4.4
EXPOSE 9002
COPY server.js .
CMD node server.js
Make sure the image path in your yaml is correct. The image should exist on your local machine, and it should be tagged with a version number, not "latest" (with the latest tag, Kubernetes defaults imagePullPolicy to Always and tries to pull from a registry).
Have this in your deployment yaml:
image: redis:1.0.48
 docker images"">
Run "docker images" to see the list of images on your machine.
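 docker images"">
Since the image only exists in Minikube's local docker daemon, the deployment should also tell Kubernetes not to pull it from a registry. A hedged fragment (the container name and tag are illustrative):

```yaml
containers:
  - name: myapp
    image: redis:1.0.48
    # Use the image already present in Minikube's daemon instead of pulling.
    imagePullPolicy: IfNotPresent
```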
When I define multiple containers in a pod/pod template like one container running agent and another php-fpm, how can they access each other? I need the agent container to connect to php-fpm by shell and need to execute few steps interactively through agent container.
Based on my understanding, we can package kubectl into the agent container and use kubectl exec -it <container id> sh to connect to the container. But I don't want the agent container to have more privileges than it needs to connect to the target container, which is php-fpm.
Is there a better way for agent container to connect to php-fpm by a shell and execute commands interactively?
Also, I wasn't successful in running kubectl from a container when using minikube, due to the following errors:
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
Error in configuration:
* unable to read client-cert /Users/user/.minikube/apiserver.crt for minikube due to open /Users/user/.minikube/apiserver.crt: no such file or directory
* unable to read client-key /Users/user/.minikube/apiserver.key for minikube due to open /Users/user/.minikube/apiserver.key: no such file or directory
* unable to read certificate-authority /Users/user/.minikube/ca.crt for minikube due to open /Users/user/.minikube/ca.crt: no such file or directory
First off, every Pod within a k8s cluster has its own k8s credentials provided by /var/run/secrets/kubernetes.io/serviceaccount/token, so there is no need to volume-mount your home directory into a docker container.
The reason you are getting the client-cert error is that the contents of ~/.kube/config are merely strings pointing to externally defined SSL key, SSL certificate, and SSL CA certificate files. But I won't go further into fixing that, since there is no good reason to use that approach.
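To make that concrete, here is a hypothetical pod spec (the names are mine) that runs the same kubectl image in-cluster with no host mounts at all; the auto-mounted service-account token provides the credentials:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-demo
spec:
  serviceAccountName: default
  containers:
    - name: kubectl
      image: lachlanevenson/k8s-kubectl
      args: ["get", "nodes"]
```

Note that the service account also needs RBAC permission to list nodes for this to return anything.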