I have used the "bitnami/kubectl:latest" image for my test-c container, which runs inside a test pod. I logged in to that container and wanted to create a file inside it, but ended up with the error below.
I have no name!#coredns-hook-c92g7:/$ touch test
touch: cannot touch 'test': Permission denied
I have no name!#coredns-hook-c92g7:/$ mkdir test
mkdir: cannot create directory 'test': Permission denied
Can someone help me understand why this problem occurs and how to fix it? I am aware that mounting the file as a ConfigMap might help, but I just need to understand this issue. Thanks in advance!
The issue here is that the Docker image you are using is configured to run as a non-root user (USER 1001 in this case).
Have a look at the Dockerfile instruction:
https://github.com/bitnami/bitnami-docker-kubectl/blob/master/1.24/debian-11/Dockerfile#L24
So you can either:
create files in a directory the non-root user can write to, such as /tmp, or
create your own Docker image with that USER 1001 instruction removed from the Dockerfile, and host it in your own repository so it can be pulled into your cluster (a sketch is shown below).
Whatever works for you.
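If you go with the custom image route, a minimal Dockerfile sketch (assuming you simply need the container to run as root again; the base tag is just an example) could look like this:

FROM bitnami/kubectl:latest
# Switch back to root; this overrides the USER 1001 instruction from the base image
USER 0

Build it, push it to your own registry, and reference that image in your pod spec. If all you need is a writable scratch location, mounting an emptyDir volume at a path of your choice avoids rebuilding the image at all.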
Hope this helps!
In my docker-compose.yml, I'm using the following to mount SSL certs into my container:
- ./certs:/var/lib/postgresql/certs
The ./certs folder and everything within it is owned by root locally.
However, upon starting the container, I receive:
2022-08-26 20:04:40.623 UTC [1] FATAL: could not load server certificate file "/var/lib/postgresql/certs/db.crt": Permission denied
Updating the permissions locally to anything else (777, 755, etc.) results in a separate error:
FATAL: private key file "/var/lib/postgresql/certs/postgresdb.key" has group or world access
I realize I can copy the certs via my Dockerfile, but I'd rather not have to build a new image each time I want to change certificates. What is the best way to go about handling this?
Change the ownership of the certs to the user that's used inside the container, before you start the container.
You need to double-check the id of the user, since you didn't show what image you run. Below is an example.
sudo chmod -R 400 ./certs
sudo chown -R 5432:5432 ./certs
Alternatively, you can run the container with your local user ID. I only recommend this for development purposes.
docker run --user "$(id -u)" postgres
In that case, also make sure your local user has permissions on the certs dir.
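If you'd rather keep this in docker-compose.yml, a rough sketch (assuming the service is called db; Compose does not define UID/GID by default, so you would need to export them in your shell first) could be:

services:
  db:
    image: postgres
    user: "${UID}:${GID}"
    volumes:
      - ./certs:/var/lib/postgresql/certs

As with the docker run variant, make sure the certs directory and the key file are readable by that user, and that the key is not group- or world-readable.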
I try to deploy mongodb with helm and it gives this error:
mkdir: cannot create directory '/bitnami/mongodb/data': Permission denied
I also tried this solution:
sudo chown -R 1001 /tmp/mongo
but it says there is no such directory.
You have permission denied on /bitnami/mongodb/data and you are trying to modify another path: /tmp/mongo. It is possible that you do not have such a directory at all.
You need to change the owner of the resource for which you don't have permissions, not random (non-related) paths :)
You've probably seen this github issue and this answer:
You are getting that error message because the container can't mount the /tmp/mongo directory you specified in the docker-compose.yml file.
As you can see in our changelog, the container was migrated to the non-root user approach, that means that the user 1001 needs read/write permissions in the /tmp/mongo folder so it can be mounted and used. Can you modify the permissions in your local folder and try to launch the container again?
sudo chown -R 1001 /tmp/mongo
This method will work if you are actually mounting the /tmp/mongo folder, which is not a very common setup. Have a look at another answer:
Please note that mounting host path volumes is not the usual way to work with these containers. If using docker-compose, it would be using docker volumes (which already handle the permission issue), the same would apply with Kubernetes and the MongoDB helm chart, which would use the securityContext section to ensure the proper permissions.
In your situation, you'll just have to change the owner of the path /bitnami/mongodb/data, or use a securityContext in your Helm chart, and everything should work out for you.
The most interesting part, with an example securityContext, is probably this:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
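For the Bitnami MongoDB chart specifically, this is usually set through the chart values rather than a raw pod spec. The exact key names depend on the chart version, but a values.yaml sketch along these lines should be close:

podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: true
  runAsUser: 1001

You would then pass it with something like helm install my-mongo bitnami/mongodb -f values.yaml.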
I am trying to use the data directory from a preexisting database & bring up a new postgres docker container (same version 9.5) with its '/var/lib/postgresql/data' bind mounted to the data directory.
I find that even though I am able to bring up the container and use psql within the container to connect to it, external connections fail with an invalid password. This is despite me setting the POSTGRES_USER, POSTGRES_DB, and POSTGRES_PASSWORD environment variables.
Is there a way to make this work? I also tried this method but ran into a permission error:
"sh: 1: /usr/local/bin/init.sh: Permission denied"
Thanks
This happens when your user/group ID does not match the file owner. You should run your container with --user.
Please have a look at the Arbitrary --user notes on https://hub.docker.com/_/postgres
Hope that will help you fix your problem.
For Compose, look at https://docs.docker.com/compose/reference/run/
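As a rough sketch of that approach (paths are placeholders; note that the POSTGRES_* variables only take effect when the data directory is initialized for the first time, which is why they had no effect on your preexisting data):

docker run -d \
  --user "$(id -u):$(id -g)" \
  -v /path/to/existing/data:/var/lib/postgresql/data \
  -v /etc/passwd:/etc/passwd:ro \
  postgres:9.5

The read-only /etc/passwd mount is the workaround from the Arbitrary --user notes so the arbitrary UID resolves to a user name inside the container; the data directory must already be readable and writable by that UID.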
OK, I figured out the way to do this and it turns out to be very simple. All I did was:
Add a script and copy it into docker-entrypoint-initdb.d in my Dockerfile.
In the script, I had a loop that waited for the db to be up and running before resetting the superuser password and privileges.
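A minimal sketch of the kind of script described above (the user name and password are placeholders, and this assumes psql and pg_isready are available in the image, as they are in the official postgres image):

#!/bin/bash
# Wait until the server accepts connections, then reset the superuser password
until pg_isready -U postgres; do
  sleep 1
done
psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'newpassword';"

Also make sure the script is executable (chmod +x) when you copy it into the image, which is what a "Permission denied" like the one in the question usually points to.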
Thanks
Saving data in Kubernetes is not persistent, so we should use a volume.
For example, we can mount "/apt" to save data in "/apt".
Now I want to mount "/", but I get this error.
Error: Error response from daemon: invalid volume specification:
'/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/':
invalid mount config for type "bind": invalid specification:
destination can't be '/'
The question is: how can I mount "/" in Kubernetes?
Not completely sure about your environment, but I ran into this issue today because I wanted to be able to browse the entire root filesystem of a container via SSH (WinSCP) to the host. I am using Docker in a Photon OS VM environment.
The answer I've come to is: you can't do what you're trying to do, but you may be able to accomplish what you're trying to accomplish. Let's say I created a volume called mysql and I create a new (oversimplified) mysql container using that volume as root:
docker volume create --name mysql
docker run -d --name=mysqldb -v /var/lib/docker/volumes/mysql:/ mysql:5.7
Docker will cry and say I can't mount to root (destination can't be '/'). However, since I know the location where our volumes live (/var/lib/docker/volumes/) then we can simply create our container as normal and an arbitrarily-named volume will be placed in that folder. So if your goal is (as mine was) to be able to ssh to the host and browse the files in the root of your container, you CAN do that, you just need to go to the correct arbitrarily-named volume. In my case it is "/var/lib/docker/volumes/12dccb66f2eeaeefe8e1feabb86f3c6def87b091dabeccad2902851caa97f04c/_data", which isn't as pretty as "/var/lib/docker/volumes/mysql", but it gets the job done.
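Rather than hunting for the hashed directory by hand, you can also ask Docker where a container's or volume's data lives on the host (container and volume names here are the ones from the example above), for example:

docker inspect -f '{{ json .Mounts }}' mysqldb
docker volume inspect -f '{{ .Mountpoint }}' mysql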
Hope that helps someone.
I want to run postgres inside a Docker container with a mounted volume. I am following steps as describe here. However, the container never starts. I think this is because the /var/lib/postgresql/data directory is owned by user postgres with uid 999, and group postgres with gid 999.
My understanding is that I need to create a user and group with the same uid and gid on my host (the name doesn't matter), and assign these permissions to the directory I am mounting on my host.
The problem is that the uid and gid are already taken on my host. I can rebuild the Docker image from the Dockerfile and modify the uid and gid values, but I don't think this is a good long term solution as I want to be able to use the official postgres images from Docker Hub.
My question is, if a container defines permissions that already exist on the host, how do you map permission from the host to the container without having to rebuild the container itself with the configuration from your environment?
If I am misunderstanding things or am way off the mark, what is the right way to get around this problem?
You are right about /var/lib/postgresql/data. When you run the container, it changes the owner of the files in that directory (inside the container) to the postgres user (with user id 999). If the files are already present in the mounted volume, changing the ownership may fail if the user you run Docker with does not have the right permissions. There is an excellent explanation of file ownership in Docker here: Understanding user file ownership in docker.
My question is, if a container defines permissions that already exist on the host, how do you map permission from the host to the container without having to rebuild the container itself with the configuration from your environment?
I think what you might be looking for is Docker user namespaces; see Introduction to User Namespaces in Docker Engine. They allow you to fix permissions in Docker volumes.
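As a rough illustration, user namespace remapping is a daemon-wide setting (it requires a daemon restart and affects all containers), enabled with something like this in /etc/docker/daemon.json:

{
  "userns-remap": "default"
}

With that in place, uid 0 inside containers maps to a subordinate, unprivileged uid range on the host, which changes how ownership of files in volumes appears from the host side.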
For your specific case, if you don't want the files in the mounted volume to have uid 999, you could just override the entrypoint of the container and change the uid of the postgres user.
docker run --entrypoint="bash" postgres -c 'usermod -u 2006 postgres;exec /docker-entrypoint.sh postgres'