Weird permissions podman docker-compose volume - docker-compose

I have a docker-compose.yml file that specifies some volumes to mount. Here is an example:
backend-move:
  container_name: backend-move
  environment:
    APP_ENV: prod
  image: backend-move:latest
  logging:
    options:
      max-size: 250m
  ports:
    - 8080:8080
  tty: true
  volumes:
    - php_static_logos:/app/public/images/logos
    - ./volumes/nginx-php/robots.txt:/var/www/html/public/robots.txt
    - ./volumes/backend/mysql:/app/mysql
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf
After I run podman-compose up -d and enter the container with docker exec -it backend-move bash, I see these crazy permissions (??????????) on the mounted files:
bash-4.4$ ls -la
ls: cannot access 'welcome.conf': Permission denied
total 28
drwxrwxrwx. 2 root root 114 Apr 21 12:29 .
drwxrwxrwx. 5 root root 105 Apr 21 12:29 ..
-rwxrwxrwx. 1 root root 400 Mar 21 17:33 README
-rwxrwxrwx. 1 root root 2926 Mar 21 17:33 autoindex.conf
-rwxrwxrwx. 1 root root 1517 Apr 21 12:29 php.conf
-rwxrwxrwx. 1 root root 8712 Apr 21 12:29 ssl.conf
-rwxrwxrwx. 1 root root 1252 Mar 21 17:27 userdir.conf
-?????????? ? ? ? ? ? welcome.conf
Any suggestions?
[root@45 /]# podman-compose --version
['podman', '--version', '']
using podman version: 3.4.2
podman-composer version 1.0.3
podman --version
podman version 3.4.2

I was facing the exact same issue, although on macOS with the podman machine; since the parent dir is mounted into the podman machine, I do get write permissions there.
On Linux, however, it just fails as in your example.
To fix my issue, I had to add:
privileged: true
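For reference, a minimal sketch of where the flag goes, assuming the backend-move service from the question (in podman, privileged containers also run without SELinux label separation, which is often what turns bind-mounted files into ?????????? entries on SELinux hosts):
backend-move:
  image: backend-move:latest
  privileged: true   # run with extended privileges; SELinux labels are not enforced on the mounts
  volumes:
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf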

Related

Permission denied when attempting to edit docker container files

I'm working on a Dockerized project which has an Adminer container. I need to increase the value of post_max_size found in /usr/local/etc/php/conf.d/0-upload_large_dumps.ini.
My problem is that any attempt to edit the file results in a permission denied response. Usually this would be a problem resolved by using sudo, but that gives permission denied as well.
The following is a listing of the directory I'm trying to edit, showing that the target file is owned by root:
/var/www/html $ cd /usr/local/etc/php/conf.d/
/usr/local/etc/php/conf.d $ ls -l
total 24
-rw-r--r-- 1 root root 113 Nov 18 22:10 0-upload_large_dumps.ini
-rw-r--r-- 1 root root 23 Nov 18 22:11 docker-php-ext-pdo_dblib.ini
-rw-r--r-- 1 root root 23 Nov 18 22:10 docker-php-ext-pdo_mysql.ini
-rw-r--r-- 1 root root 22 Nov 18 22:11 docker-php-ext-pdo_odbc.ini
-rw-r--r-- 1 root root 23 Nov 18 22:11 docker-php-ext-pdo_pgsql.ini
-rw-r--r-- 1 root root 17 Nov 18 17:03 docker-php-ext-sodium.ini
And the adminer section of docker-compose is as follows:
adminer:
  image: adminer
  restart: always
  labels:
    - traefik.port=8080
    - traefik.frontend.rule=Host:adminer
How can I edit docker-compose so I have permissions to update the files?
There is nothing to change in your docker-compose.yaml.
To change the file, you can just exec into the container as the root user.
So I suppose that, right now, you are doing
docker compose exec adminer ash
And then you are trying to edit that file.
What you can do, instead, is:
docker compose exec --user root adminer ash
That way you will be able to edit the files owned by root.
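For example, a hypothetical editing session (vi comes from BusyBox in the Alpine-based Adminer image; a plain restart keeps the change until the container is recreated):
docker compose exec --user root adminer ash
vi /usr/local/etc/php/conf.d/0-upload_large_dumps.ini   # raise post_max_size here
exit
docker compose restart adminer   # restart PHP so the new ini value is picked up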
That said, keep in mind that the philosophy of Docker is that containers should be short-lived, so you would be better off having your own Dockerfile that bakes that configuration change in for good. Another way to do it would be to mount a file over the existing one to change the configuration.
Example of adaptation in a Dockerfile:
FROM adminer
COPY ./0-upload_large_dumps.ini \
     /usr/local/etc/php/conf.d/0-upload_large_dumps.ini
## ^-- copies a file from the folder where you build
## in order to override the existing configuration
Then in your docker-compose.yml:
adminer:
  build: .
  image: your-own-namespace/adminer
  restart: always
  labels:
    - traefik.port=8080
    - traefik.frontend.rule=Host:adminer
Example of mounting a file to override the configuration file:
adminer:
  image: adminer
  restart: always
  labels:
    - traefik.port=8080
    - traefik.frontend.rule=Host:adminer
  volumes:
    - "./0-upload_large_dumps.ini:\
       /usr/local/etc/php/conf.d/0-upload_large_dumps.ini"
    ## a local file `0-upload_large_dumps.ini` on your host
    ## will override the container ini file

k8s: configmap mounted inside symbolic link to "..data" directory

Here is my volumeMount:
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter
Here are my volumes:
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
      items:
        - key: interpreter-spec.yaml
          path: interpreter-spec.yaml
The problem is how the volume gets mounted. My volumeMount ends up looking like this:
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
Why is it mounting a ..data directory inside itself?
What can I say - this is expected behavior that is barely documented. It is due to how secrets and configmaps are mounted into the running container.
When you mount a secret or configmap as a volume, the path at which Kubernetes mounts it contains the root-level items as symlinks of the same names into a ..data directory, which is itself a symlink to the real mount point.
For example,
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
The real mount point (..2020_07_07_13_18_32.149716995 in the example above) will change each time the secret or configmap (in your case) is updated, so the real path of your interpreter-spec.yaml will change after each update.
What you can do is use the subPath option in volumeMounts. By design, a container using secrets and configmaps as a subPath volume mount will not receive updates. You can leverage this behavior to mount files individually. You will need to change the pod spec each time you add/remove any file to/from the secret/configmap, and a deployment rollout will be required to apply changes after each secret/configmap update.
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter/interpreter-spec.yaml
    subPath: interpreter-spec.yaml
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
I would also like to mention the question "Kubernetes config map symlinks (..data/): is there a way to avoid them?", where you may find additional info.

How to set default mode for secrets?

When secrets are created, they are 0755 and owned by root:
/ # ls -al /var/run/secrets/
total 0
drwxr-xr-x 4 root root 52 Apr 16 21:56 .
drwxr-xr-x 1 root root 21 Apr 16 21:56 ..
drwxr-xr-x 3 root root 28 Apr 16 21:56 eks.amazonaws.com
drwxr-xr-x 3 root root 28 Apr 16 21:56 kubernetes.io
I want them to be 0700 instead. I know that for regular secret volumes I can use
- name: vol-sec-smtp
  secret:
    defaultMode: 0600
    secretName: smtp
and it will mount (at least the secret files themselves) as 0600. Can I achieve the same with the secrets located at /var/run/secrets directly from the yaml file?
You can disable the default service account token mount and then mount it yourself as you showed.
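A minimal sketch of that approach, assuming a hypothetical pod named example (the projected volume below reproduces what Kubernetes normally mounts for the service account, but with defaultMode applied; the kube-root-ca.crt ConfigMap exists in every namespace on recent clusters):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  automountServiceAccountToken: false   # disable the default 0755 token mount
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
  volumes:
    - name: sa-token
      projected:
        defaultMode: 0600               # the mode you want instead of the default
        sources:
          - serviceAccountToken:
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    fieldPath: metadata.namespace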

Docker not recognizing Postgresql data directory

I am desperately trying to get a Docker project I have inherited up and running, and Docker is giving me no end of problems. When trying to start up my containers I get the following error on my Postgresql container:
FATAL: "/var/lib/postgresql/data" is not a valid data directory
DETAIL: File "/var/lib/postgresql/data/PG_VERSION" does not contain valid data.
HINT: You might need to initdb.
The project is a Rails project using Redis, ElasticSearch, and Sidekiq containers as well - those all load fine.
docker-compose.yml:
postgres:
  image: postgres:9.6.2
  environment:
    POSTGRES_USER: $PG_USER
    POSTGRES_PASSWORD: $PG_PASS
  ports:
    - '5432:5432'
  volumes:
    - postgres:/var/lib/postgresql/data
/var/lib/postgresql/data is owned by the postgres user (as it should be I believe) and the postgresql service starts up and runs fine on its own.
I have tried running initdb from the /usr/lib/postgresql/9.6/bin directory, as well as from inside Docker (from Docker it doesn't seem to persist or even create anything... if anyone knows why, I would be interested in knowing).
The contents of the /var/lib/postgresql/data directory:
drwxrwxrwx 19 postgres postgres 4096 Jun 28 20:41 .
drwxr-xr-x 5 postgres postgres 4096 Jun 28 20:41 ..
drwx------ 5 postgres postgres 4096 Jun 28 20:41 base
drwx------ 2 postgres postgres 4096 Jun 28 20:41 global
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_clog
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_commit_ts
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_dynshmem
-rw------- 1 postgres postgres 4468 Jun 28 20:41 pg_hba.conf
-rw------- 1 postgres postgres 1636 Jun 28 20:41 pg_ident.conf
drwx------ 4 postgres postgres 4096 Jun 28 20:41 pg_logical
drwx------ 4 postgres postgres 4096 Jun 28 20:41 pg_multixact
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_notify
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_replslot
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_serial
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_snapshots
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_stat
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_stat_tmp
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_subtrans
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_tblspc
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_twophase
-rw------- 1 postgres postgres 4 Jun 28 20:41 PG_VERSION
drwx------ 3 postgres postgres 4096 Jun 28 20:41 pg_xlog
-rw------- 1 postgres postgres 88 Jun 28 20:41 postgresql.auto.conf
-rw------- 1 postgres postgres 22267 Jun 28 20:41 postgresql.conf
PG_VERSION contains 9.6
Any help is much appreciated.
You're changing the default PostgreSQL data path, hence you need to initialize the database. Try this:
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
Here is the init.sql file:
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
I had the same issue; I restarted the Docker daemon and even restarted the machine, but that didn't fix it.
The path /var/lib/postgresql/data was not even on the FS.
Note: docker ps was not showing the postgresql container as running.
Solution (docker-compose):
Stop the postgres container:
docker-compose -f <path_to_my_docker_compose_file.yaml> down postgres
Start the postgres container:
docker-compose -f <path_to_my_docker_compose_file.yaml> up -d postgres
-- That did the trick! --
Solution (docker):
docker stop <postgres_container>
docker start <postgres_container>
Another solution that you can try:
initdb <temporary_volume_folder>
Example:
initdb /tmp/postgres
docker-compose -f <path_to_my_docker_compose_file.yaml> up -d postgres
or
initdb /tmp/postgres
docker start <postgres_container>
Note:
In my case the postgres image is defined in the docker-compose.yaml file, and as you can see I define neither PGDATA nor PG_VERSION, and the container runs OK.
postgres:
  image: postgres:9.6.14
  container_name: postgres
  environment:
    POSTGRES_USER: 'pg12345678'
    POSTGRES_PASSWORD: 'pg12345678'
  ports:
    - "5432:5432"
So when you have postgres:/var/lib/postgresql/data, it is going to mount /var/lib/postgresql/data to a Docker data volume called postgres. Docker data volumes are all stored together in a location that varies depending on the OS.
Try changing it to ./postgres to have it create a directory called postgres relative to your working directory (see the sketch below).
Since the source is changing, it will recreate the database, and I'd be willing to bet it fixes the error you're seeing. If not, it could be a permission issue on the host OS.
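A sketch of the two variants for that service, assuming the compose file from the question (a named volume needs a matching top-level volumes: entry, while the bind-mount variant just points at a host directory):
# Variant 1 - bind mount: data ends up in ./postgres next to docker-compose.yml
postgres:
  image: postgres:9.6.2
  volumes:
    - ./postgres:/var/lib/postgresql/data

# Variant 2 - named volume: data lives in Docker's own storage area
# (inspect its location with: docker volume inspect <project>_postgres)
postgres:
  image: postgres:9.6.2
  volumes:
    - postgres:/var/lib/postgresql/data
volumes:
  postgres: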

How to map one single file into kubernetes pod using hostPath?

I have my own nginx configuration /home/ubuntu/workspace/web.conf, generated by a script. I would like to have it under /etc/nginx/conf.d alongside default.conf.
Below is the nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: webconf
      hostPath:
        path: /home/ubuntu/workspace/web.conf
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 18001
          protocol: TCP
      volumeMounts:
        - mountPath: /etc/nginx/conf.d/web.conf
          name: webconf
However, it gets mounted as a directory instead of a file:
$ kubectl create -f nginx.yaml
pod "nginx" created
$ kubectl exec -it nginx -- bash
root@nginx:/app# ls -al /etc/nginx/conf.d/
total 12
drwxr-xr-x 1 root root 4096 Aug 3 12:27 .
drwxr-xr-x 1 root root 4096 Aug 3 11:46 ..
-rw-r--r-- 2 root root 1093 Jul 11 13:06 default.conf
drwxr-xr-x 2 root root 0 Aug 3 11:46 web.conf
With a plain Docker container, -v hostfile:containerfile works.
How can I do this in Kubernetes?
BTW: I use minikube 0.21.0 on Ubuntu 16.04 LTS with KVM.
Try using the subPath key on your volumeMounts like this:
apiVersion: v1
kind: Pod
metadata:
  name: singlefile
spec:
  containers:
    - image: ubuntu
      name: singlefiletest
      command:
        - /bin/bash
        - -c
        - ls -la /singlefile/ && cat /singlefile/hosts
      volumeMounts:
        - mountPath: /singlefile/hosts
          name: etc
          subPath: hosts
  volumes:
    - name: etc
      hostPath:
        path: /etc
Example:
$ kubectl apply -f singlefile.yaml
pod "singlefile" created
$ kubectl logs singlefile
total 24
drwxr-xr-x. 2 root root 4096 Aug 3 12:50 .
drwxr-xr-x. 1 root root 4096 Aug 3 12:50 ..
-rw-r--r--. 1 root root 1213 Apr 26 21:25 hosts
# /etc/hosts: Local Host Database
#
# This file describes a number of aliases-to-address mappings for the for
# local hosts that share this file.
...
Actually it is caused by KVM, which is used by minikube.
path: /home/ubuntu/workspace/web.conf
If I log in to the minikube VM, the path is a directory there.
$ ls -al /home/ubuntu/workspace # in minikube host
total 12
drwxrwxr-x 2 ubuntu ubuntu 4096 Aug 3 12:11 .
drwxrwxr-x 5 ubuntu ubuntu 4096 Aug 3 19:28 ..
-rw-rw-r-- 1 ubuntu ubuntu 1184 Aug 3 12:11 web.conf
$ minikube ssh
$ ls -al /home/ubuntu/workspace # in minikube vm
total 0
drwxr-xr-x 3 root root 0 Aug 3 19:41 .
drwxr-xr-x 4 root root 0 Aug 3 19:41 ..
drwxr-xr-x 2 root root 0 Aug 3 19:41 web.conf
I don't know exactly why KVM host folder sharing behaves like this.
Therefore I use the minikube mount command instead (see host_folder_mount.md), and then it works as expected, as sketched below.
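A minimal sketch of that workaround, assuming the workspace path from the question (minikube mount stays in the foreground, so run it in a separate terminal while the pod needs the files):
# on the host, in its own terminal
minikube mount /home/ubuntu/workspace:/home/ubuntu/workspace

# then recreate the pod so the hostPath resolves to the real file
kubectl delete pod nginx
kubectl create -f nginx.yaml
kubectl exec -it nginx -- ls -la /etc/nginx/conf.d/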