How to set default mode for secrets? - kubernetes

When secrets are created, they are 0755 owned by root:
/ # ls -al /var/run/secrets/
total 0
drwxr-xr-x 4 root root 52 Apr 16 21:56 .
drwxr-xr-x 1 root root 21 Apr 16 21:56 ..
drwxr-xr-x 3 root root 28 Apr 16 21:56 eks.amazonaws.com
drwxr-xr-x 3 root root 28 Apr 16 21:56 kubernetes.io
I want them to be 0700 instead. I know that for regular secret volumes I can use
- name: vol-sec-smtp
  secret:
    defaultMode: 0600
    secretName: smtp
and it will mount (at least the secret files themselves) as 0600. Can I achieve the same with the secrets located at /var/run/secrets directly from the yaml file?

You can disable the default service account token mount and then mount it yourself as you showed.
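For illustration, here is a minimal sketch of that approach, assuming a cluster recent enough to publish the kube-root-ca.crt ConfigMap in each namespace; the pod and volume names are illustrative, and the projected sources mirror what the kubelet normally assembles for the default token mount:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  automountServiceAccountToken: false    # suppress the default 0755 token mount
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
  volumes:
    - name: sa-token
      projected:
        defaultMode: 0600                 # the mode you want
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    fieldPath: metadata.namespace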

Related

Weird permissions podman docker-compose volume

I have a docker-compose.yml file with some volumes to mount. Here is an example:
backend-move:
  container_name: backend-move
  environment:
    APP_ENV: prod
  image: backend-move:latest
  logging:
    options:
      max-size: 250m
  ports:
    - 8080:8080
  tty: true
  volumes:
    - php_static_logos:/app/public/images/logos
    - ./volumes/nginx-php/robots.txt:/var/www/html/public/robots.txt
    - ./volumes/backend/mysql:/app/mysql
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf
After I run podman-compose up -d and go into the container via docker exec -it backend-move bash,
I see these crazy permissions (??????????) on the mounted files:
bash-4.4$ ls -la
ls: cannot access 'welcome.conf': Permission denied
total 28
drwxrwxrwx. 2 root root 114 Apr 21 12:29 .
drwxrwxrwx. 5 root root 105 Apr 21 12:29 ..
-rwxrwxrwx. 1 root root 400 Mar 21 17:33 README
-rwxrwxrwx. 1 root root 2926 Mar 21 17:33 autoindex.conf
-rwxrwxrwx. 1 root root 1517 Apr 21 12:29 php.conf
-rwxrwxrwx. 1 root root 8712 Apr 21 12:29 ssl.conf
-rwxrwxrwx. 1 root root 1252 Mar 21 17:27 userdir.conf
-?????????? ? ? ? ? ? welcome.conf
Any suggestions?
[root@45 /]# podman-compose --version
['podman', '--version', '']
using podman version: 3.4.2
podman-composer version 1.0.3
podman --version
podman version 3.4.2
I was facing the exact same issue, although on macOS with the podman machine; since the parent dir is mounted into the podman machine there, I do get write permissions.
On Linux, however, it just fails as in your example.
To fix my issue, I had to add:
privileged: true
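For reference, a trimmed sketch of where that flag sits in the service definition (based on the compose file above):
backend-move:
  container_name: backend-move
  image: backend-move:latest
  privileged: true    # lets the container access the host-mounted files
  volumes:
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf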

k8s: configmap mounted inside symbolic link to "..data" directory

Here is my volumeMount:
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter
And here are my volumes:
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
      items:
        - key: interpreter-spec.yaml
          path: interpreter-spec.yaml
The problem arises in how the volume is mounted. My volumeMount ends up looking like this:
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
Why is it mounting a ..data directory and symlinking everything into it?
What can I say: this is expected but barely documented behavior. It is due to how secrets and configmaps are mounted into the running container.
When you mount a secret or configmap as a volume, the path at which Kubernetes mounts it will contain root-level items symlinking the same names into a ..data directory, which is itself a symlink to the real mountpoint.
For example,
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
The real mountpoint (..2020_07_07_13_18_32.149716995 in the example above) changes each time the secret or configmap (in your case) is updated, so the real path of your interpreter-spec.yaml changes after each update.
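You can observe this indirection with readlink; the output below is illustrative:
$ kubectl exec zeppelin-759db57cb6-xw42b -- readlink -f /zeppelin/k8s/interpreter/interpreter-spec.yaml
/zeppelin/k8s/interpreter/..2020_07_07_13_18_32.149716995/interpreter-spec.yaml
After an update, the same command would report a path under a new ..<timestamp> directory.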
What you can do is use the subPath option in volumeMounts. By design, a container using secrets or configmaps as a subPath volume mount will not receive updates. You can leverage this behavior to mount files individually. You will need to change the pod spec each time you add or remove any file to or from the secret / configmap, and a deployment rollout will be required to apply changes after each secret / configmap update.
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter/interpreter-spec.yaml
    subPath: interpreter-spec.yaml
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
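If this works as intended, listing the mount should now show a plain file rather than symlinks; illustrative output:
$ kubectl exec zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter/interpreter-spec.yaml
-rw-r--r--. 1 root root 1024 Jul  7 13:18 /zeppelin/k8s/interpreter/interpreter-spec.yaml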
I would also like to mention the question "Kubernetes config map symlinks (..data/): is there a way to avoid them?" (included below), where you may find additional info.

Kubernetes config map symlinks (..data/) : is there a way to avoid them?

I have noticed that when I create and mount a config map that contains some text files, the container will see those files as symlinks to ..data/myfile.txt.
For example, if my config map is named tc-configs and contains 2 XML files named stripe1.xml and stripe2.xml, and I mount this config map to /configs, then in my container I will have:
bash-4.4# ls -al /configs/
total 12
drwxrwxrwx 3 root root 4096 Jun 4 14:47 .
drwxr-xr-x 1 root root 4096 Jun 4 14:47 ..
drwxr-xr-x 2 root root 4096 Jun 4 14:47 ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 31 Jun 4 14:47 ..data -> ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe1.xml -> ..data/stripe1.xml
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe2.xml -> ..data/stripe2.xml
I guess Kubernetes requires those symlinks and the ..data and ..<timestamp>/ folders, but I know some applications that can fail to start up if they see unexpected files or folders.
Is there a way to tell Kubernetes not to generate all those symlinks and mount the files directly?
I think this solution is satisfactory: specifying the exact file path in mountPath gets rid of the symlinks to ..data and ..2018_06_04_19_31_41.860238952.
So if I apply a manifest like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html/users.xml
          name: site-data
          subPath: users.xml
  volumes:
    - name: site-data
      configMap:
        name: users
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: users
data:
  users.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <users>
    </users>
Since I'm making use of subPath explicitly, and subPath mounts are not part of the "auto update magic" of ConfigMaps, I won't see any more symlinks:
$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxr-xr-x 1 www-data www-data 4096 Jun 4 19:18 .
drwxr-xr-x 1 root root 4096 Jun 4 17:58 ..
-rw-r--r-- 1 root root 73 Jun 4 19:18 users.xml
Be careful not to forget subPath, otherwise users.xml will be a directory!
Back to my initial manifest:
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html
          name: site-data
  volumes:
    - name: site-data
      configMap:
        name: users
I'll see those symlinks coming back:
$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxrwxrwx 3 root root 4096 Jun 4 19:31 .
drwxr-xr-x 3 root root 4096 Jun 4 17:58 ..
drwxr-xr-x 2 root root 4096 Jun 4 19:31 ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 31 Jun 4 19:31 ..data -> ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 16 Jun 4 19:31 users.xml -> ..data/users.xml
Many thanks to psycotica0 on the K8s Canada Slack for putting me on the right track with subPath (it is mentioned only briefly in the ConfigMap documentation).
I am afraid I don't know whether you can tell Kubernetes not to generate those symlinks, although I think it is native behaviour.
If having those files and links is an issue, a workaround I can think of is to mount the configmap in one folder and copy the files over to another folder when you initialise the container:
initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', 'cp /configmap/* /configs']
    volumeMounts:
      - name: configmap
        mountPath: /configmap
      - name: config
        mountPath: /configs
But you would have to declare two volumes, one for the configMap (configmap) and one for the final directory (config):
volumes:
  - name: config
    emptyDir: {}
  - name: configmap
    configMap:
      name: myconfigmap
Obviously, change the volume type of the config volume as you please.
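Note that the main container must also mount the config volume to see the copied files; a hypothetical sketch (container name and image are illustrative):
containers:
  - name: app
    image: myapp:latest
    volumeMounts:
      - name: config
        mountPath: /configs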

How to map one single file into kubernetes pod using hostPath?

I have my own nginx configuration, /home/ubuntu/workspace/web.conf, generated by a script. I'd prefer to have it under /etc/nginx/conf.d beside default.conf.
Below is the nginx.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: webconf
      hostPath:
        path: /home/ubuntu/workspace/web.conf
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 18001
          protocol: TCP
      volumeMounts:
        - mountPath: /etc/nginx/conf.d/web.conf
          name: webconf
However, it is mounted as a folder:
$ kubectl create -f nginx.yaml
pod "nginx" created
$ kubectl exec -it nginx -- bash
root@nginx:/app# ls -al /etc/nginx/conf.d/
total 12
drwxr-xr-x 1 root root 4096 Aug 3 12:27 .
drwxr-xr-x 1 root root 4096 Aug 3 11:46 ..
-rw-r--r-- 2 root root 1093 Jul 11 13:06 default.conf
drwxr-xr-x 2 root root 0 Aug 3 11:46 web.conf
With Docker this works via -v hostfile:containerfile.
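For example (illustrative):
$ docker run -d -v /home/ubuntu/workspace/web.conf:/etc/nginx/conf.d/web.conf nginx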
How can I do this in Kubernetes?
BTW: I use minikube 0.21.0 on Ubuntu 16.04 LTS with KVM.
Try using the subPath key on your volumeMounts like this:
apiVersion: v1
kind: Pod
metadata:
  name: singlefile
spec:
  containers:
    - image: ubuntu
      name: singlefiletest
      command:
        - /bin/bash
        - -c
        - ls -la /singlefile/ && cat /singlefile/hosts
      volumeMounts:
        - mountPath: /singlefile/hosts
          name: etc
          subPath: hosts
  volumes:
    - name: etc
      hostPath:
        path: /etc
Example:
$ kubectl apply -f singlefile.yaml
pod "singlefile" created
$ kubectl logs singlefile
total 24
drwxr-xr-x. 2 root root 4096 Aug 3 12:50 .
drwxr-xr-x. 1 root root 4096 Aug 3 12:50 ..
-rw-r--r--. 1 root root 1213 Apr 26 21:25 hosts
# /etc/hosts: Local Host Database
#
# This file describes a number of aliases-to-address mappings for the for
# local hosts that share this file.
...
Actually it is caused by KVM, which is used by minikube:
path: /home/ubuntu/workspace/web.conf
If I log in to minikube, this path is a folder in the VM.
$ ls -al /home/ubuntu/workspace # in minikube host
total 12
drwxrwxr-x 2 ubuntu ubuntu 4096 Aug 3 12:11 .
drwxrwxr-x 5 ubuntu ubuntu 4096 Aug 3 19:28 ..
-rw-rw-r-- 1 ubuntu ubuntu 1184 Aug 3 12:11 web.conf
$ minikube ssh
$ ls -al /home/ubuntu/workspace # in minikube vm
total 0
drwxr-xr-x 3 root root 0 Aug 3 19:41 .
drwxr-xr-x 4 root root 0 Aug 3 19:41 ..
drwxr-xr-x 2 root root 0 Aug 3 19:41 web.conf
I don't know exactly why KVM host folder sharing behaves like this.
Therefore I use the minikube mount command instead (see host_folder_mount.md), and then it works as expected.
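For example (paths illustrative):
$ minikube mount /home/ubuntu/workspace:/home/ubuntu/workspace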

Kubernetes 1.4 secret file permission not working

Running K8s 1.4 with minikube on Mac. I have the following in my replication controller YAML:
volumes:
  - name: secret-volume
    secret:
      secretName: config-ssh-key-secret
      items:
        - key: "id_rsa"
          path: ./id_rsa
          mode: 0400
        - key: "id_rsa.pub"
          path: ./id_rsa.pub
        - key: "known_hosts"
          path: ./known_hosts
volumeMounts:
  - name: secret-volume
    readOnly: true
    mountPath: /root/.ssh
When I exec into a pod and check, I see the following:
~/.ssh # ls -ltr
lrwxrwxrwx 1 root root 18 Oct 6 17:01 known_hosts -> ..data/known_hosts
lrwxrwxrwx 1 root root 17 Oct 6 17:01 id_rsa.pub -> ..data/id_rsa.pub
lrwxrwxrwx 1 root root 13 Oct 6 17:01 id_rsa -> ..data/id_rsa
plus looking at the ~ level:
drwxrwxrwt 3 root root 140 Oct 6 17:01 .ssh
So the directory isn't read-only, and the file permissions seem to have been ignored (even the default 0644 doesn't seem to be working).
Am I doing something wrong or is this a bug?
The .ssh directory has links to the actual files. Following the links shows that the actual files have the correct permissions (read-only for id_rsa).
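For example (illustrative output; mode 0400 shows up as r--------):
~/.ssh # ls -l ..data/id_rsa
-r-------- 1 root root 1675 Oct  6 17:01 ..data/id_rsa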
I validated that the SSH setup would actually work by execing into a container generated from that replication controller and doing a git clone over SSH from a repo set up with that key.