I have my ssh private key (/home/user/.ssh/id_rsa) as a volume-mounted secret in my container. Kubernetes seems to mount it with uid 0. However, my app runs as a specific user and therefore can't access the SSH private key, whose permissions must be 600 (or stricter). How can I change the ownership of my private key to that of a specific user?
thanks.
In Linux, usernames are mapped to a user id which can be seen with the command id -u someusername.
By default, SSH requires that your private key be owned by the user running SSH and be hidden from others (mode 600 or stricter).
Therefore, I highly recommend you copy your key instead of mounting it, unless your container user has the same user id as you.
If you are using a Linux container, you can run a command inside the container to get the exact user id, and then chown your files with that user id instead of a user name.
kubectl exec -it mypod -- bash (or sh if bash isn't available)
$ id -u someuser
OR
kubectl exec -it mypod -- id -u (if your container has a single user, which started the main process)
THEN
Copy your identity file so you can chown the copy without interfering with your own ability to ssh.
mkdir -p /data/secrets/myapp
cp /home/user/.ssh/id_rsa /data/secrets/myapp/id_rsa
chown $MYAPPUSERID:$MYAPPUSERID /data/secrets/myapp/id_rsa
chmod 600 /data/secrets/myapp/id_rsa
Because the host OS might have already mapped this user id to a name, it may look like your files are owned by another arbitrary user, but what ultimately matters is the numeric user id of the owner/group.
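In Kubernetes itself, the copy-and-chown steps above can be automated with an init container that stages the key into an emptyDir shared with the app container. This is a sketch, not a definitive manifest: the secret name, image names, and uid 1000 are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: ssh-secret
      secret:
        secretName: my-ssh-key        # hypothetical secret holding id_rsa
    - name: ssh-staging
      emptyDir: {}                    # writable scratch volume shared by both containers
  initContainers:
    - name: fix-key-perms
      image: busybox
      command:
        - sh
        - -c
        - cp /secret/id_rsa /staging/id_rsa &&
          chown 1000:1000 /staging/id_rsa &&
          chmod 600 /staging/id_rsa
      volumeMounts:
        - name: ssh-secret
          mountPath: /secret
        - name: ssh-staging
          mountPath: /staging
  containers:
    - name: myapp
      image: myapp:latest             # your application image (assumed)
      volumeMounts:
        - name: ssh-staging
          mountPath: /home/user/.ssh  # app sees a key it actually owns
```

The app container never mounts the secret directly; it only sees the staged copy that already has the right owner and mode.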
Related
In my docker-compose.yml, I'm using the following to mount SSL certs into my container:
- ./certs:/var/lib/postgresql/certs
The ./certs folder and everything within it is owned by root locally.
However, upon starting the container, I receive:
2022-08-26 20:04:40.623 UTC [1] FATAL: could not load server certificate file "/var/lib/postgresql/certs/db.crt": Permission denied
Updating the permissions locally to anything else (777, 755, etc.) results in a separate error:
FATAL: private key file "/var/lib/postgresql/certs/postgresdb.key" has group or world access
I realize I can copy the certs via my Dockerfile, but I'd rather not have to build a new image each time I want to change certificates. What is the best way to go about handling this?
Change the ownership of the certs to the user that's used inside the container, before you start the container.
You need to double-check the id of the user, since you didn't show what image you run. Below is an example.
sudo chown -R 5432:5432 ./certs
sudo chmod 700 ./certs
sudo chmod 400 ./certs/*
(Note: the directory itself needs the execute bit so the files inside can be opened; only the cert and key files should be 400.)
Alternatively, you can run the container with your local user ID. I only recommend this for development purposes.
docker run --user "$(id -u)" postgres
In that case, also make sure your local user has permissions on the certs dir.
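If you go the local-user route with Compose rather than docker run, the same idea can be sketched in docker-compose.yml. The UID/GID variables are an assumption: they must actually be exported in your shell (UID is a shell variable, not exported by default), and this is for development only.

```yaml
services:
  db:
    image: postgres
    user: "${UID}:${GID}"   # run the server as your local user (development only)
    volumes:
      - ./certs:/var/lib/postgresql/certs
```

When running as an arbitrary user, also make sure the data directory the image writes to is writable by that user.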
I'm using the postgres:13.1-alpine image for a docker container. I tried to restore a backup of the volume using the following command:
docker run --rm \
--volume [DOCKER_COMPOSE_PREFIX]_[VOLUME_NAME]:/[TEMPORARY_DIRECTORY_STORING_EXTRACTED_BACKUP] \
--volume $(pwd):/[TEMPORARY_DIRECTORY_TO_STORE_BACKUP_FILE] \
ubuntu \
tar xvf /[TEMPORARY_DIRECTORY_TO_STORE_BACKUP_FILE]/[BACKUP_FILENAME].tar -C /[TEMPORARY_DIRECTORY_STORING_EXTRACTED_BACKUP] --strip 1
Something went wrong and now I can't access the database. I used to access it with the user/role myuser, but that role no longer seems to exist.
What I tried
I can still access the container using docker exec -it postgres sh. But I can't start psql, because none of the roles root, postgres, or myuser exist.
All the solutions I have found so far are basically the same: either use the postgres user to create another user, or use the root user to create the role "postgres". These solutions don't work.
Most likely your database is toast. It is hard to see how an innocent backing-up accident could leave you with no predictable users but an otherwise intact database. It is at least plausible, though, that you or docker blew away your database entirely, then created a new one with a user whose name you wouldn't immediately guess. So check the size of the data directory: is it more plausible for the database you hope to find, or for one newly created from scratch?
I would run strings DATADIR/global/1260 and see if it finds anything recognizable as a user name in there, then you could try logging in as that.
Or, you could shut down the database and restart it in single-user mode, /path/to/bin/postgres --single -D /path/to/DATADIR, and take a look at pg_authid to see what is in there.
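As a quick illustration of the strings approach: role names are stored as plain text inside the otherwise binary catalog file, so strings will surface them. The sample file and the role name appuser below are fabricated for the demo; on a real system you would point strings at DATADIR/global/1260.

```shell
# Simulate a binary catalog file that embeds a role name.
printf 'BIN\000DATA\000appuser\000...' > /tmp/pg_authid_sample
# strings prints printable runs (4+ chars by default), exposing the name.
strings /tmp/pg_authid_sample
```

Any role name long enough to clear the default 4-character threshold will show up in the output.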
I want to run postgres inside a Docker container with a mounted volume. I am following the steps described here. However, the container never starts. I think this is because the /var/lib/postgresql/data directory is owned by user postgres with uid 999, and group postgres with gid 999.
My understanding is that I need to create a user and group with the same uid and gid on my host (the name doesn't matter), and assign these permissions to the directory I am mounting on my host.
The problem is that the uid and gid are already taken on my host. I can rebuild the Docker image from the Dockerfile and modify the uid and gid values, but I don't think this is a good long term solution as I want to be able to use the official postgres images from Docker Hub.
My question is, if a container defines permissions that already exist on the host, how do you map permission from the host to the container without having to rebuild the container itself with the configuration from your environment?
If I am misunderstanding things or am way off the mark, what is the right way to get around this problem?
You are right about /var/lib/postgresql/data. When you run the container, its entrypoint changes the owner of the files in that directory (inside the container) to the user postgres (user id 999). If files are already present in the mounted volume, changing the ownership may fail if the user you run docker with does not have the right permissions. There is an excellent explanation of file ownership in docker here: Understanding user file ownership in docker.
My question is, if a container defines permissions that already exist on the host, how do you map permission from the host to the container without having to rebuild the container itself with the configuration from your environment?
I think what you might be looking for is docker user namespaces: Introduction to User Namespaces in Docker Engine. They allow you to fix permissions in docker volumes.
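User namespaces are enabled on the daemon side. As a sketch, a minimal /etc/docker/daemon.json for the default remapping looks like this (it requires a daemon restart, and note that the remapping applies to all containers the daemon runs):

```json
{
  "userns-remap": "default"
}
```

With this in place, uid 0 inside a container maps to an unprivileged subordinate uid range on the host, so files a container writes as root no longer appear root-owned on the host.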
For your specific case, if you don't want the files in the mounted volume to have uid 999, you could just override the entrypoint of the container and change the uid of the user postgres.
docker run --entrypoint="bash" postgres -c 'usermod -u 2006 postgres; exec /docker-entrypoint.sh postgres'
I'm using this Dockerfile to deploy PostgreSQL on OpenShift: https://github.com/sclorg/postgresql-container/tree/master/9.5
It works fine until I enable ssl=on and inject the server.crt and server.key files into the postgres pod via the volume mount option.
The secret is created like this:
$ oc secret new postgres-secrets \
server.key=postgres/server.key \
server.crt=postgres/server.crt \
root-ca.crt=ca-cert
The volume is created as below and attached to the given DeploymentConfig of postgres.
$ oc volume dc/postgres \
--add --type=secret \
--secret-name=postgres-secrets \
--default-mode=0600 \
-m /var/lib/pgdata/data/secrets/secrets/
The problem is that the mounted server.crt and server.key files are owned by root, but postgres expects them to be owned by the postgres user. Because of that, the postgres server won't come up and reports this error:
waiting for server to start....FATAL: could not load server
certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt":
Permission denied stopped waiting pg_ctl: could not start server
How can we mount a volume and update the uid:gid of the files in it?
It looks like this is not trivial, as it requires setting a volume security context so that all the containers in the pod run as a certain user: https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html
In the Kubernetes project this is still under discussion (https://github.com/kubernetes/kubernetes/issues/2630), but it seems that you may have to use Security Contexts and PodSecurityPolicies in order to make it work.
I think the easiest option (without using the above) would be to use a container entrypoint that, before actually executing PostgreSQL, chowns the files to the proper user (postgres in this case).
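A sketch of such a wrapper entrypoint, assuming the secret mount path from the question. To keep the sketch runnable anywhere, /tmp paths stand in for the real ones, the demo fabricates the mounted secret files, and the chown/exec lines are left commented since they need root and the real image.

```shell
#!/bin/sh
# Wrapper entrypoint sketch: stage the root-owned mounted certs into a
# directory the database user can own, fix ownership/permissions, then
# hand off to the real entrypoint.
SECRETS=/tmp/demo-secrets        # stands in for /var/lib/pgdata/data/secrets/secrets
CERTDIR=/tmp/demo-server-certs   # writable staging directory (assumed)

# --- demo setup only: simulate the read-only secret mount ---
mkdir -p "$SECRETS"
printf 'cert\n' > "$SECRETS/server.crt"
printf 'key\n'  > "$SECRETS/server.key"

# --- the actual entrypoint logic ---
mkdir -p "$CERTDIR"
cp "$SECRETS/server.crt" "$SECRETS/server.key" "$CERTDIR/"
chmod 600 "$CERTDIR/server.crt" "$CERTDIR/server.key"
# chown postgres:postgres "$CERTDIR"/server.*  # needs root; postgres is uid 26 in sclorg images
# exec run-postgresql "$@"                     # sclorg image's original entrypoint
```

postgresql.conf (or the pod's environment) would then point ssl_cert_file and ssl_key_file at the staged copies instead of the mount.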
I have a postgres docker image, which can be deployed on Bluemix Containers. It works fine, but when I attach a volume, the container fails with a permission error.
I am using $PGDATA as /var/lib/postgresql/data.
In the entrypoint script I run sudo chown -R postgres /var/lib/postgresql/data, and I have mounted the volume using the option -v data1:/var/lib/postgresql/data.
But when I start the container, chown always fails with a permission error.
I have added the postgres user to the root group, but it still gives me the same error:
chown: changing ownership of '/var/lib/postgresql/data': Permission denied
How do I fix this issue?
I found a way around adding postgres to the root group (which is a security flaw in my eyes).
First make the volume writable for everyone, then add a folder in the volume as the user you want to run your daemon as (in your case postgres). After this you can reset the volume's access rights to the default again.
I use this snippet in my entrypoint scripts at setup time:
chsh -s /bin/bash www-data                 # temporarily give www-data a login shell
chmod 777 /var/www                         # open the volume so any user can create entries
su -c "mkdir -p /var/www/html" www-data    # create the folder as the daemon user
chmod 755 /var/www                         # restore the default access rights
usermod -s /bin/false www-data             # lock the shell again
Instead of chowning the volume directory to the postgres user, change its permissions to allow group write:
$ chmod g+w $PGDATA
Since you already added the postgres user to the root group, it should work now.
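A minimal illustration of the idea, with a scratch directory standing in for $PGDATA: the directory can stay owned by root, and the group-write bit is what lets a group member (here, a postgres user that was added to the owning group) create files in it.

```shell
# Demo directory standing in for $PGDATA.
mkdir -p /tmp/pgdata-demo
# Grant write access to the owning group without changing ownership.
chmod g+w /tmp/pgdata-demo
# Inspect the mode: the group write bit is now set.
ls -ld /tmp/pgdata-demo
```

This avoids chown entirely, which is exactly what fails on the Bluemix volume.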