I'm working on a Dockerized project which has an Adminer container. I need to increase the value of post_max_size found in /usr/local/etc/php/conf.d/0-upload_large_dumps.ini.
My problem is that any attempt to edit the file results in a permission denied error. Usually this would be a problem resolved by using sudo, but I get permission denied from that as well.
The following is a listing of the directory I'm trying to edit, showing the target file is owned by root:
/var/www/html $ cd /usr/local/etc/php/conf.d/
/usr/local/etc/php/conf.d $ ls -l
total 24
-rw-r--r-- 1 root root 113 Nov 18 22:10 0-upload_large_dumps.ini
-rw-r--r-- 1 root root 23 Nov 18 22:11 docker-php-ext-pdo_dblib.ini
-rw-r--r-- 1 root root 23 Nov 18 22:10 docker-php-ext-pdo_mysql.ini
-rw-r--r-- 1 root root 22 Nov 18 22:11 docker-php-ext-pdo_odbc.ini
-rw-r--r-- 1 root root 23 Nov 18 22:11 docker-php-ext-pdo_pgsql.ini
-rw-r--r-- 1 root root 17 Nov 18 17:03 docker-php-ext-sodium.ini
And the adminer section of docker-compose is as follows:
adminer:
  image: adminer
  restart: always
  labels:
    - traefik.port=8080
    - traefik.frontend.rule=Host:adminer
How can I edit docker-compose so I have permissions to update the files?
There is nothing to change in your docker-compose.yaml.
If you want to edit that file, you can just exec into the container as the root user.
So I suppose that, right now, you are doing
docker compose exec adminer ash
And then you are trying to edit those files.
What you can do, instead, is:
docker compose exec --user root adminer ash
That way you will be able to edit those files owned by root.
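For a quick one-off change from that root shell, sed can rewrite the directive in place. The original ini contents aren't shown in the question, so the starting value below is an assumption, and the commands are demonstrated on a throwaway copy so they run anywhere:

```shell
# Sketch: the same edit you would run inside the container as root on
# /usr/local/etc/php/conf.d/0-upload_large_dumps.ini, shown on a temp copy.
ini=$(mktemp)
printf 'post_max_size = 128M\n' > "$ini"                 # assumed original value
sed -i 's/^post_max_size.*/post_max_size = 1G/' "$ini"   # raise the limit
grep '^post_max_size' "$ini"                             # prints: post_max_size = 1G
rm -f "$ini"
```

Keep in mind such an edit lives only in the container's writable layer, so it disappears whenever the container is recreated.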
That said, keep in mind that the Docker philosophy is that a container should be short-lived, so you would be better off having your own Dockerfile that applies the configuration change for good. Another way is to mount a file over the existing one to override the configuration.
Example of adaptation in a Dockerfile:
FROM adminer
COPY ./0-upload_large_dumps.ini \
/usr/local/etc/php/conf.d/0-upload_large_dumps.ini
## ^-- copies a file from the build context
## in order to override the existing configuration
Then in your docker-compose.yml:
adminer:
  build: .
  image: your-own-namespace/adminer
  restart: always
  labels:
    - traefik.port=8080
    - traefik.frontend.rule=Host:adminer
Example of mounting a file to override the configuration file:
adminer:
  image: adminer
  restart: always
  labels:
    - traefik.port=8080
    - traefik.frontend.rule=Host:adminer
  volumes:
    - "./0-upload_large_dumps.ini:\
       /usr/local/etc/php/conf.d/0-upload_large_dumps.ini"
    ## a local file `0-upload_large_dumps.ini` on your host
    ## will override the container ini file
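Either variant needs a local 0-upload_large_dumps.ini next to the compose file. Its real contents aren't shown in the question, so the values below are illustrative placeholders, not the original file:

```ini
; 0-upload_large_dumps.ini -- example values only (assumed, not the original):
; raise PHP's POST body and file upload limits for large SQL dumps
post_max_size = 1G
upload_max_filesize = 1G
```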
Related
I have specified docker-compose.yml file with some volumes to mount. Here is example:
backend-move:
  container_name: backend-move
  environment:
    APP_ENV: prod
  image: backend-move:latest
  logging:
    options:
      max-size: 250m
  ports:
    - 8080:8080
  tty: true
  volumes:
    - php_static_logos:/app/public/images/logos
    - ./volumes/nginx-php/robots.txt:/var/www/html/public/robots.txt
    - ./volumes/backend/mysql:/app/mysql
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf
After I run podman-compose up -d and enter the container with docker exec -it backend-move bash,
I see these broken permissions (??????????) on the mounted files:
bash-4.4$ ls -la
ls: cannot access 'welcome.conf': Permission denied
total 28
drwxrwxrwx. 2 root root 114 Apr 21 12:29 .
drwxrwxrwx. 5 root root 105 Apr 21 12:29 ..
-rwxrwxrwx. 1 root root 400 Mar 21 17:33 README
-rwxrwxrwx. 1 root root 2926 Mar 21 17:33 autoindex.conf
-rwxrwxrwx. 1 root root 1517 Apr 21 12:29 php.conf
-rwxrwxrwx. 1 root root 8712 Apr 21 12:29 ssl.conf
-rwxrwxrwx. 1 root root 1252 Mar 21 17:27 userdir.conf
-?????????? ? ? ? ? ? welcome.conf
Any suggestions?
[root@45 /]# podman-compose --version
['podman', '--version', '']
using podman version: 3.4.2
podman-composer version 1.0.3
podman --version
podman version 3.4.2
I was facing the exact same issue, although on macOS with the podman machine; since the parent dir is mounted into the podman machine there, I do get write permissions.
On Linux, though, it fails just as in your example.
To fix my issue, I had to add:
privileged: true
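In compose terms that means adding the flag to the service definition; a sketch against the backend-move service from the question (other keys elided):

```yaml
backend-move:
  image: backend-move:latest
  privileged: true   # relaxes the confinement that was blocking the bind mounts
  volumes:
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf
```

Note that privileged: true is a blunt instrument; on SELinux-enabled hosts the narrower fix for bind-mount permission errors is usually the :z or :Z suffix on the volume entry.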
Trying to sync between MongoDB and Elasticsearch with MongoDbJdbcDriver by following this answer. I use docker-compose for development, and this is how it looks for logstash:
logstash:
  image: logstash:7.9.1
  container_name: logstash
  volumes:
    - ./logstash/jars/gson-2.8.6.jar:/usr/share/logstash/logstash-core/lib/jars/gson-2.8.6.jar:ro
    - ./logstash/jars/mongojdbc2.3.jar:/usr/share/logstash/logstash-core/lib/jars/mongojdbc2.3.jar:ro
    - ./logstash/jars/mongo-java-driver-3.12.6.jar:/usr/share/logstash/logstash-core/lib/jars/mongo-java-driver-3.12.6.jar:ro
    - ./logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
    - ./logstash/pipeline/mongo-to-elasticsearch.conf:/usr/share/logstash/pipeline/mongo-to-elasticsearch.conf
  command: logstash
  depends_on:
    - elasticsearch
I run docker-compose up and it gives this error:
...
logstash | Error: unable to load mongojdbc2.3.jar from :jdbc_driver_library, file not readable (please check user and group permissions for the path)
logstash | Exception: LogStash::PluginLoadingError
...
I've checked the file permissions of mongojdbc2.3.jar on my machine and given read and write to my user and the docker group. However, when I check inside the logstash container, the owner is not root but logstash:
// ls -l /usr/share/logstash/logstash-core/lib/jars
...
-rw-r--r-- 1 logstash logstash 2315317 Jul 24 15:13 mongo-java-driver-3.12.6.jar
-rw-r--r-- 1 logstash logstash 83500 Sep 8 01:30 mongojdbc2.3.jar
-rw-rw-r-- 1 logstash root 107210 Sep 1 23:32 org.eclipse.core.commands-3.6.0.jar
...
I tried changing the ownership inside the container, but I couldn't since I don't have sudo permission.
Please help: how can I sync between MongoDB and Elasticsearch successfully?
Stack: MongoDB (v4.4.1), Logstash (v7.9.1), Docker (v19.03.12), Docker-compose (v1.27.3)
Try building a new image with the jars baked in:
logstash:
  build: ./logstash/
  container_name: logstash
  volumes:
    - ./logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
    - ./logstash/pipeline/mongo-to-elasticsearch.conf:/usr/share/logstash/pipeline/mongo-to-elasticsearch.conf
  command: logstash
  depends_on:
    - elasticsearch
./logstash/Dockerfile
FROM docker.elastic.co/logstash/logstash:7.9.1
COPY ./jars/ /usr/share/logstash/logstash-core/lib/jars
Here is my volumeMount:
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter
Here are my volumes:
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
      items:
        - key: interpreter-spec.yaml
          path: interpreter-spec.yaml
The problem is how the volume gets mounted. Inside the pod it looks like this:
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
Why is it mounting the ..data directory to itself?
This is expected, though barely documented, behavior. It comes from how secrets and configmaps are mounted into a running container.
When you mount a secret or configmap as a volume, the path at which Kubernetes mounts it contains the root-level items as symlinks of the same names into a ..data directory, which is itself a symlink to the real mountpoint.
For example,
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
The real mountpoint (..2020_07_07_13_18_32.149716995 in the example above) changes each time the secret or configmap (in your case) is updated, so the real path of your interpreter-spec.yaml changes after each update.
What you can do is use the subPath option in volumeMounts. By design, a container using secrets or configmaps as a subPath volume mount will not receive updates. You can leverage this behavior to mount files individually. You will need to change the pod spec each time you add/remove a file to/from the secret/configmap, and a deployment rollout will be required to apply changes after each secret/configmap update.
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter/interpreter-spec.yaml
    subPath: interpreter-spec.yaml
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
I would also like to mention the question Kubernetes config map symlinks (..data/): is there a way to avoid them?, where you may find additional info.
I have noticed that when I create and mount a config map that contains some text files, the container will see those files as symlinks to ..data/myfile.txt.
For example, if my config map is named tc-configs and contains 2 xml files named stripe1.xml and stripe2.xml, and I mount this config map to /configs, I will have, in my container:
bash-4.4# ls -al /configs/
total 12
drwxrwxrwx 3 root root 4096 Jun 4 14:47 .
drwxr-xr-x 1 root root 4096 Jun 4 14:47 ..
drwxr-xr-x 2 root root 4096 Jun 4 14:47 ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 31 Jun 4 14:47 ..data -> ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe1.xml -> ..data/stripe1.xml
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe2.xml -> ..data/stripe2.xml
I guess Kubernetes requires those symlinks and the ..data and ..<timestamp>/ folders, but I know some applications that can fail to start up if they see unexpected files or folders.
Is there a way to tell Kubernetes not to generate all those symlinks and mount the files directly?
I think this solution is satisfactory: specifying the exact file path in mountPath gets rid of the symlinks to ..data and ..2018_06_04_19_31_41.860238952.
So if I apply a manifest such as:
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html/users.xml
          name: site-data
          subPath: users.xml
  volumes:
    - name: site-data
      configMap:
        name: users
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: users
data:
  users.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <users>
    </users>
Since I'm making use of subPath explicitly, and subPath mounts are not part of the "auto update magic" of ConfigMaps, I won't see any more symlinks:
$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxr-xr-x 1 www-data www-data 4096 Jun 4 19:18 .
drwxr-xr-x 1 root root 4096 Jun 4 17:58 ..
-rw-r--r-- 1 root root 73 Jun 4 19:18 users.xml
Be careful not to forget subPath, otherwise users.xml will be a directory!
Back to my initial manifest:
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html
          name: site-data
  volumes:
    - name: site-data
      configMap:
        name: users
I'll see those symlinks coming back:
$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxrwxrwx 3 root root 4096 Jun 4 19:31 .
drwxr-xr-x 3 root root 4096 Jun 4 17:58 ..
drwxr-xr-x 2 root root 4096 Jun 4 19:31 ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 31 Jun 4 19:31 ..data -> ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 16 Jun 4 19:31 users.xml -> ..data/users.xml
Many thanks to psycotica0 on the K8s Canada Slack for putting me on the right track with subPath (it is briefly mentioned in the ConfigMap documentation).
I am afraid I don't know whether you can tell Kubernetes not to generate those symlinks, although I think it is native behaviour.
If having those files and links is an issue, a workaround I can think of is to mount the configmap in one folder and copy the files over to another folder when you initialise the container:
initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', 'cp /configmap/* /configs']
    volumeMounts:
      - name: configmap
        mountPath: /configmap
      - name: config
        mountPath: /configs
But you would have to declare two volumes, one for the configMap (configmap) and one for the final directory (config):
volumes:
  - name: config
    emptyDir: {}
  - name: configmap
    configMap:
      name: myconfigmap
Obviously, change the type of the config volume as you please.
Running K8s 1.4 with minikube on Mac, I have the following in my replication controller yaml:
volumes:
  - name: secret-volume
    secret:
      secretName: config-ssh-key-secret
      items:
        - key: "id_rsa"
          path: ./id_rsa
          mode: 0400
        - key: "id_rsa.pub"
          path: ./id_rsa.pub
        - key: "known_hosts"
          path: ./known_hosts
volumeMounts:
  - name: secret-volume
    readOnly: true
    mountPath: /root/.ssh
When I exec into a pod and check, I see the following:
~/.ssh # ls -ltr
lrwxrwxrwx 1 root root 18 Oct 6 17:01 known_hosts -> ..data/known_hosts
lrwxrwxrwx 1 root root 17 Oct 6 17:01 id_rsa.pub -> ..data/id_rsa.pub
lrwxrwxrwx 1 root root 13 Oct 6 17:01 id_rsa -> ..data/id_rsa
Plus, looking at the ~ level:
drwxrwxrwt 3 root root 140 Oct 6 17:01 .ssh
So the directory isn't read-only, and the file permissions seem to have been ignored (even the default 0644 doesn't seem to be applied).
Am I doing something wrong, or is this a bug?
The .ssh directory has links to the actual files. Following the links shows the actual files have the correct permissions (read-only for id_rsa).
I validated that the ssh setup would actually work by exec'ing into a container generated from that replication controller and doing a git clone via ssh to a repo holding that key.
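For reference, the per-item mode keys can also be complemented by a volume-wide defaultMode; this is a sketch of the same secret volume using that field (defaultMode is part of the Kubernetes secret-volume API, the value shown is assumed):

```yaml
volumes:
  - name: secret-volume
    secret:
      secretName: config-ssh-key-secret
      defaultMode: 0400   # applied to every projected file unless an item overrides it
```

Note the mode governs the projected files under ..data; the visible entries remain symlinks, which is why ls on the directory shows lrwxrwxrwx.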