ConfigMap being mounted as a folder instead of a file when trying to mount 2 files into a k8s pod

I have a problem when trying to mount 2 files into a pod.
Here's the StatefulSet manifest (the volumes and volumeMounts are the relevant parts):
# Source: squid/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: squid-dev
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
spec:
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  serviceName: squid-dev
  selector:
    matchLabels:
      app: squid
      chart: squid-0.4.1
      release: squid-dev
      heritage: Helm
  volumeClaimTemplates:
  - metadata:
      name: squid-dev-cache
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
  template:
    metadata:
      annotations:
        checksum: checksum
        checksum/config: e51a4d6e552f890604aaa4c47c522653c25cad7ffec5680f67bbaadba6d3c3b2
        checksum/secret: secret
      labels:
        app: squid
        chart: squid-0.4.1
        release: squid-dev
        heritage: Helm
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - "squid"
      containers:
      - name: squid
        image: "honestica/squid:4-ff434982-c47b-47c3-b705-b2adb2730978"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: squid-dev-config
          mountPath: /etc/squid/squid.conf
          subPath: squid.conf
        - name: squid-dev-config
          mountPath: /etc/squid/squid.conf.backup
          subPath: squid.conf.backup
        - name: squid-dev-cache
          mountPath: /var/cache/squid
        ports:
        - name: port3128
          containerPort: 3128
          protocol: TCP
        - name: port8080
          containerPort: 8080
          protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 3128
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          {}
      volumes:
      - name: squid-dev-config
        configMap:
          name: squid-dev
And this is the manifest of the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: squid-dev-config
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
data:
  squid.conf: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7 # RFC 4193 local private network range
    acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80 # http
    acl Safe_ports port 21 # ftp
    acl Safe_ports port 443 # https
    acl Safe_ports port 70 # gopher
    acl Safe_ports port 210 # wais
    #acl Safe_ports port 1025-9999 # unregistered ports
    acl Safe_ports port 280 # http-mgmt
    acl Safe_ports port 488 # gss-http
    acl Safe_ports port 591 # filemaker
    acl Safe_ports port 777 # multiling http
    acl CONNECT method CONNECT
    ...
  squid.conf.backup: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7 # RFC 4193 local private network range
    acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80 # http
    acl Safe_ports port 21 # ftp
    acl Safe_ports port 443 # https
    acl Safe_ports port 70 # gopher
    acl Safe_ports port 210 # wais
    #acl Safe_ports port 1025-9999 # unregistered ports
    acl Safe_ports port 280 # http-mgmt
    acl Safe_ports port 488 # gss-http
    acl Safe_ports port 591 # filemaker
    acl Safe_ports port 777 # multiling http
    acl CONNECT method CONNECT
    ...
After installing with Helm, I exec into a pod and list the /etc/squid folder; the result is below:
/ # ls -la /etc/squid/
total 388
drwxr-xr-x 1 root root 31 Mar 25 19:09 .
drwxr-xr-x 1 root root 19 Mar 25 19:09 ..
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf.default
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css.default
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf.default
-rw-r--r-- 1 root root 3598 Mar 25 19:09 squid.conf
drwxrwxrwx 2 root root 6 Mar 25 19:09 squid.conf.backup
-rw-r--r-- 1 root root 2526 Oct 30 23:43 squid.conf.default
-rw-r--r-- 1 root root 344566 Oct 30 23:43 squid.conf.documented
Why is squid.conf mounted as a file while squid.conf.backup is mounted as a folder? I have changed the name of squid.conf.backup to various other names, but it still creates a folder instead of a file, and if I choose a name that matches an existing file in this folder (e.g. cachemgr.conf), the pod fails to start:
Warning Failed 3s (x3 over 15s) kubelet Error: failed to start container "squid": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/411dc966-ed7d-494c-b9b7-4abfe1639f00/volume-subpaths/squid-dev-config/squid/0" to rootfs at "/etc/squid/cachemgr.conf" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Only squid.conf can be mounted as a file; anything else is mounted as a folder.
How can I fix this? Can anyone explain this behavior?
I have searched on Google, and the fluent-bit Helm chart can mount 2 files into pods using only 1 ConfigMap: https://github.com/fluent/helm-charts/tree/main/charts/fluent-bit
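For comparison, here is a minimal, self-contained sketch of mounting two keys from a single ConfigMap via subPath (all names here, such as subpath-demo and demo-config, are placeholders, not taken from the chart above). In my experience, the subPath value must match a key that actually exists in the ConfigMap the volume references by metadata.name; if it does not, the mount tends to show up as an empty directory rather than a file:
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls -la /etc/demo && sleep 3600"]
    volumeMounts:
    # each mount points at one key of the same ConfigMap volume
    - name: demo-config
      mountPath: /etc/demo/first.conf
      subPath: first.conf
    - name: demo-config
      mountPath: /etc/demo/second.conf
      subPath: second.conf
  volumes:
  - name: demo-config
    configMap:
      name: demo-config   # must match metadata.name of the ConfigMap exactly
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  first.conf: |
    # first file
  second.conf: |
    # second file
Applying both objects and running ls -la /etc/demo inside the pod should then show two regular files.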
Kubectl version:
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Helm version:
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}

Related

Microk8s Ingress returns 502

I'm new to Kubernetes and trying to do a simple project connecting MySQL and phpMyAdmin using Kubernetes on my Ubuntu 20.04. I created the components needed, and here they are.
mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-root-password
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-password
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mysql-configmap
              key: mysql-database
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
phpmyadmin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
      - name: phpmyadmin
        image: phpmyadmin
        ports:
        - containerPort: 3000
        env:
        - name: PMA_HOST
          valueFrom:
            configMapKeyRef:
              name: mysql-configmap
              key: database_url
        - name: PMA_PORT
          value: "3306"
        - name: PMA_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-username
        - name: PMA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-password
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 3000
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: phpmyadmin-service
      port:
        number: 8080
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phpmyadmin-service
            port:
              number: 8080
when I execute microk8s kubectl get ingress ingress-service, the output is:
NAME              CLASS    HOSTS      ADDRESS     PORTS   AGE
ingress-service   public   test.com   127.0.0.1   80      45s
and when I tried to access test.com, that's when I got the 502 error.
My kubectl version:
Client Version: v1.22.2-3+9ad9ee77396805
Server Version: v1.22.2-3+9ad9ee77396805
My microk8s' client and server version:
Client:
Version: v1.5.2
Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a
Go version: go1.15.15
Server:
Version: v1.5.2
Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a
UUID: b2bf55ad-6942-4824-99c8-c56e1dee5949
As for my microk8s version itself, I followed the installation instructions from here, so it should be 1.21/stable. (I couldn't find a way to check the exact version online; if someone knows how, please tell me.)
mysql.yaml logs:
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Initializing database files
2021-10-14T07:05:38.960693Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.26) initializing of server in progress as process 41
2021-10-14T07:05:38.967970Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:39.531763Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:40.591862Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:40.592247Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:40.670594Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Database files initialized
2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Starting temporary server
2021-10-14T07:05:45.362827Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 90
2021-10-14T07:05:45.486702Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:45.845971Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:46.022043Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:46.022189Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:46.023446Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-10-14T07:05:46.023728Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-10-14T07:05:46.026088Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-10-14T07:05:46.044967Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
2021-10-14T07:05:46.045036Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
2021-10-14 07:05:46+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating database testing-database
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating user testinguser
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Giving user testinguser access to schema testing-database
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Stopping temporary server
2021-10-14T07:05:48.422053Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.26).
2021-10-14T07:05:50.543822Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.26) MySQL Community Server - GPL.
2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: Temporary server stopped
2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
2021-10-14T07:05:51.711889Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 1
2021-10-14T07:05:51.725302Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:51.959356Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:52.162432Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:52.162568Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:52.163400Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-10-14T07:05:52.163556Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-10-14T07:05:52.165840Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-10-14T07:05:52.181516Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2021-10-14T07:05:52.181562Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
phpmyadmin.yaml logs:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message
[Thu Oct 14 03:57:32.653011 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.51 (Debian) PHP/7.4.24 configured -- resuming normal operations
[Thu Oct 14 03:57:32.653240 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
Here is also the Allocatable section from the describe nodes command:
Allocatable:
  cpu:                4
  ephemeral-storage:  113289380Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             5904508Ki
  pods:               110
and the Allocated resources:
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                550m (13%)   200m (5%)
  memory             270Mi (4%)   370Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Any help? Thanks in advance.
Turns out it was a mistake of mine: I specified phpMyAdmin's containerPort as 3000, while the default image listens on port 80. After changing the containerPort and phpmyadmin-service's targetPort to 80, the phpMyAdmin page opens.
Sorry to kkopczak and AndD for the fuss, and big thanks for trying to help! :)
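For reference, a sketch of what the corrected port wiring could look like (the env section is omitted for brevity; everything else stays as in the original phpmyadmin.yaml above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
      - name: phpmyadmin
        image: phpmyadmin
        ports:
        - containerPort: 80   # the default phpmyadmin image serves on port 80
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80            # must point at the containerPort above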

HTTP request always blocked in container of k8s cluster pod

Stages:
connect to a container's shell
curl www.xxx.com (this always hangs)
...
Then I use tcpdump on the host machine and filter by IP:
tcpdump -i eth0 host ip
3 11:05:05 2019/12/2 133.5701630 10.171.162.231 111.111.222.333 TCP TCP: [Bad CheckSum]Flags=......S., SrcPort=48836, DstPort=HTTP(80), PayloadLen=0, Seq=126843476, Ack=0, Win=29200 ( Negotiating scale factor 0x7 ) = 29200
4 11:05:05 2019/12/2 133.5704230 111.111.222.333 10.171.162.231 TCP TCP:Flags=...A..S., SrcPort=HTTP(80), DstPort=48836, PayloadLen=0, Seq=3228156738, Ack=126843477, Win=2896 ( Negotiated scale factor 0x9 ) = 1482752
5 11:05:05 2019/12/2 133.5704630 10.171.162.231 111.111.222.333 TCP TCP: [Bad CheckSum]Flags=...A...., SrcPort=48836, DstPort=HTTP(80), PayloadLen=0, Seq=126843477, Ack=3228156739, Win=229 (scale factor 0x7) = 29312
6 11:05:05 2019/12/2 133.5705430 10.171.162.231 111.111.222.333 HTTP HTTP:Request, GET /api/test, Query:debug
7 11:05:05 2019/12/2 133.5707110 111.111.222.333 10.171.162.231 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=48836, PayloadLen=0, Seq=3228156739, Ack=126843596, Win=6 (scale factor 0x9) = 3072
The TCP flag sequence is:
src -> dst syn
dst -> src syn/ack
src -> dst ack
src -> dst ack/push
dst -> src ack
The curl command waits a long time and then throws a timeout error. In a normal request there is a dst -> src ack/push packet, but I never receive it.
I don't know why, or how to resolve it.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-dep
  labels:
    app: test-app
    version: stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
      version: stable
  template:
    metadata:
      labels:
        app: test-app
        version: stable
    spec:
      containers:
      - image: test-app
        name: test-app
        livenessProbe:
          httpGet:
            path: /health/status
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 10
        ports:
        - containerPort: 80

How to map one single file into kubernetes pod using hostPath?

I have my own nginx configuration /home/ubuntu/workspace/web.conf, generated by a script. I would like to have it under /etc/nginx/conf.d alongside default.conf.
Below is the nginx.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: webconf
    hostPath:
      path: /home/ubuntu/workspace/web.conf
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 18001
      protocol: TCP
    volumeMounts:
    - mountPath: /etc/nginx/conf.d/web.conf
      name: web
However, it is mounted as a folder only:
$ kubectl create -f nginx.yaml
pod "nginx" created
$ kubectl exec -it nginx -- bash
root@nginx:/app# ls -al /etc/nginx/conf.d/
total 12
drwxr-xr-x 1 root root 4096 Aug 3 12:27 .
drwxr-xr-x 1 root root 4096 Aug 3 11:46 ..
-rw-r--r-- 2 root root 1093 Jul 11 13:06 default.conf
drwxr-xr-x 2 root root 0 Aug 3 11:46 web.conf
This works for a Docker container with -v hostfile:containerfile.
How can I do this in Kubernetes?
BTW: I use minikube 0.21.0 on Ubuntu 16.04 LTS with KVM.
Try using the subPath key on your volumeMounts like this:
apiVersion: v1
kind: Pod
metadata:
  name: singlefile
spec:
  containers:
  - image: ubuntu
    name: singlefiletest
    command:
    - /bin/bash
    - -c
    - ls -la /singlefile/ && cat /singlefile/hosts
    volumeMounts:
    - mountPath: /singlefile/hosts
      name: etc
      subPath: hosts
  volumes:
  - name: etc
    hostPath:
      path: /etc
Example:
$ kubectl apply -f singlefile.yaml
pod "singlefile" created
$ kubectl logs singlefile
total 24
drwxr-xr-x. 2 root root 4096 Aug 3 12:50 .
drwxr-xr-x. 1 root root 4096 Aug 3 12:50 ..
-rw-r--r--. 1 root root 1213 Apr 26 21:25 hosts
# /etc/hosts: Local Host Database
#
# This file describes a number of aliases-to-address mappings for the for
# local hosts that share this file.
...
Actually, it is caused by KVM, which is used by minikube.
path: /home/ubuntu/workspace/web.conf
If I log in to minikube, this path is a folder in the VM.
$ ls -al /home/ubuntu/workspace # in minikube host
total 12
drwxrwxr-x 2 ubuntu ubuntu 4096 Aug 3 12:11 .
drwxrwxr-x 5 ubuntu ubuntu 4096 Aug 3 19:28 ..
-rw-rw-r-- 1 ubuntu ubuntu 1184 Aug 3 12:11 web.conf
$ minikube ssh
$ ls -al /home/ubuntu/workspace # in minikube vm
total 0
drwxr-xr-x 3 root root 0 Aug 3 19:41 .
drwxr-xr-x 4 root root 0 Aug 3 19:41 ..
drwxr-xr-x 2 root root 0 Aug 3 19:41 web.conf
I don't know exactly why KVM host folder sharing behaves like this.
Therefore I use the minikube mount command instead (see host_folder_mount.md), and then it works as expected.
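As an aside, on clusters where the hostPath volume supports the type field, the intent that the path is a single file can be made explicit. This is a hedged variant of the Pod above and may not apply to the older minikube/KVM setup described here:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: webconf
    hostPath:
      path: /home/ubuntu/workspace/web.conf
      type: File              # kubelet rejects the pod if this path is not an existing regular file
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: webconf
      mountPath: /etc/nginx/conf.d/web.conf
With type: File the "file silently becomes a directory" failure mode is turned into an explicit pod event instead.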

Mongodb container's data becomes "read-only" after restarting kubernetes, with glusterfs as storage?

My mongo is running as a Docker container on Kubernetes, with GlusterFS providing the persistent volume. After I restart Kubernetes (the machine powers off and restarts), none of the mongo pods can come back; their logs:
chown: changing ownership of `/data/db/user_management.ns': Read-only file system
chown: changing ownership of `/data/db/storage.bson': Read-only file system
chown: changing ownership of `/data/db/local.ns': Read-only file system
chown: changing ownership of `/data/db/mongod.lock': Read-only file system
Here /data/db/ is the mounted Gluster volume, and I can confirm it is mounted in rw mode:
# kubectl get pod mongoxxx -o yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - image: mongo:3.0.5
    imagePullPolicy: IfNotPresent
    name: mongo
    ports:
    - containerPort: 27017
      protocol: TCP
    volumeMounts:
    - mountPath: /data/db
      name: mongo-storage
  volumes:
  - name: mongo-storage
    persistentVolumeClaim:
      claimName: auth-mongo-data
# kubectl describe pod mongoxxx
...
    Volume Mounts:
      /data/db from mongo-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wdrfp (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  mongo-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  auth-mongo-data
    ReadOnly:   false
...
# kubectl get pv xxx
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  name: auth-mongo-data
  resourceVersion: "215201"
  selfLink: /api/v1/persistentvolumes/auth-mongo-data
  uid: fb74a4b9-e0a3-11e6-b0d1-5254003b48ea
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 4Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: auth-mongo-data
    namespace: default
  glusterfs:
    endpoints: glusterfs-cluster
    path: infra-auth-mongo
  persistentVolumeReclaimPolicy: Retain
status:
  phase: Bound
And when I run ls on the Kubernetes node:
# ls -ls /var/lib/kubelet/pods/fc6c9ef3-e0a3-11e6-b0d1-5254003b48ea/volumes/kubernetes.io~glusterfs/auth-mongo-data/
total 163849
4 drwxr-xr-x. 2 mongo input 4096 1月 22 21:18 journal
65536 -rw-------. 1 mongo input 67108864 1月 22 21:16 local.0
16384 -rw-------. 1 mongo root 16777216 1月 23 17:15 local.ns
1 -rwxr-xr-x. 1 mongo root 2 1月 23 17:15 mongod.lock
1 -rw-r--r--. 1 mongo root 69 1月 23 17:15 storage.bson
4 drwxr-xr-x. 2 mongo input 4096 1月 22 21:18 _tmp
65536 -rw-------. 1 mongo input 67108864 1月 22 21:18 user_management.0
16384 -rw-------. 1 mongo root 16777216 1月 23 17:15 user_management.ns
I cannot chown even though the volume is mounted as rw.
My host is CentOS 7.3:
Linux c4v160 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux.
I guess it is because the GlusterFS volume I have provided is unclean. The GlusterFS volume infra-auth-mongo may contain dirty directories. One solution is to remove this volume and create another.
Another solution is to hack the mongodb Dockerfile and force it to change the ownership of /data/db before starting the mongod process, like this: https://github.com/harryge00/mongo/commit/143bfc317e431692010f09b5c0d1f28395d2055b
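On a reasonably recent cluster, the same idea of fixing ownership before mongod starts can also be sketched in the pod spec with an initContainer instead of a patched image. This is my own illustrative variant, not part of the linked commit, and it assumes the mongodb user in the image is uid/gid 999:
apiVersion: v1
kind: Pod
metadata:
  name: mongoxxx
spec:
  initContainers:
  - name: fix-permissions
    image: busybox
    # chown the Gluster-backed data directory before the mongo container starts;
    # 999:999 is assumed to be the mongodb uid/gid in the official image – adjust if different
    command: ["sh", "-c", "chown -R 999:999 /data/db"]
    volumeMounts:
    - name: mongo-storage
      mountPath: /data/db
  containers:
  - name: mongo
    image: mongo:3.0.5
    volumeMounts:
    - name: mongo-storage
      mountPath: /data/db
  volumes:
  - name: mongo-storage
    persistentVolumeClaim:
      claimName: auth-mongo-data
If the underlying GlusterFS mount itself comes back read-only after the restart, this initContainer fails with the same error, which at least surfaces the root cause earlier.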

redis cluster in Kubernetes doesn't write nodes.conf file

I'm trying to set up a Redis cluster and I followed this guide here: https://rancher.com/blog/2019/deploying-redis-cluster/
Basically I'm creating a StatefulSet with 6 replicas, so that I can have 3 master nodes and 3 slave nodes.
After all the nodes are up, I create the cluster, and it all works fine... but if I look into the file "nodes.conf" (where the configuration of all the nodes should be saved) of each Redis node, I can see it's empty.
This is a problem, because whenever a Redis node gets restarted, it looks in that file for its node configuration so it can update its own IP address and MEET the other nodes; but it finds nothing, so it basically starts a new cluster on its own, with a new ID.
My storage is an NFS-connected shared folder. The YAML responsible for the storage access is this one:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-provisioner-raid5
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner-raid5
    spec:
      serviceAccountName: nfs-provisioner-raid5
      containers:
      - name: nfs-provisioner-raid5
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-raid5-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: 'nfs.raid5'
        - name: NFS_SERVER
          value: 10.29.10.100
        - name: NFS_PATH
          value: /raid5
      volumes:
      - name: nfs-raid5-root
        nfs:
          server: 10.29.10.100
          path: /raid5
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner-raid5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs.raid5
provisioner: nfs.raid5
parameters:
  archiveOnDelete: "false"
This is the YAML of the redis cluster StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"]
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs.raid5
      resources:
        requests:
          storage: 1Gi
This is the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    echo "creating nodes"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
      echo "done"
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
and I created the cluster using the command:
kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
What am I doing wrong?
This is what I see in the /data folder: the nodes.conf file shows 0 bytes.
Lastly, this is the log from the redis-cluster-0 pod:
creating nodes
1:C 07 Nov 2019 13:01:31.166 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 07 Nov 2019 13:01:31.166 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 07 Nov 2019 13:01:31.166 # Configuration loaded
1:M 07 Nov 2019 13:01:31.179 * No cluster configuration found, I'm e55801f9b5d52f4e599fe9dba5a0a1e8dde2cdcb
1:M 07 Nov 2019 13:01:31.182 * Running mode=cluster, port=6379.
1:M 07 Nov 2019 13:01:31.182 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 07 Nov 2019 13:01:31.182 # Server initialized
1:M 07 Nov 2019 13:01:31.182 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 07 Nov 2019 13:01:31.185 * Ready to accept connections
1:M 07 Nov 2019 13:08:04.264 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
1:M 07 Nov 2019 13:08:04.306 # IP address for this node updated to 10.40.0.27
1:M 07 Nov 2019 13:08:09.216 # Cluster state changed: ok
1:M 07 Nov 2019 13:08:10.144 * Replica 10.44.0.14:6379 asks for synchronization
1:M 07 Nov 2019 13:08:10.144 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '27972faeb07fe922f1ab581cac0fe467c85c3efd', my replication IDs are '31944091ef93e3f7c004908e3ff3114fd733ea6a' and '0000000000000000000000000000000000000000')
1:M 07 Nov 2019 13:08:10.144 * Starting BGSAVE for SYNC with target: disk
1:M 07 Nov 2019 13:08:10.144 * Background saving started by pid 1041
1041:C 07 Nov 2019 13:08:10.161 * DB saved on disk
1041:C 07 Nov 2019 13:08:10.161 * RDB: 0 MB of memory used by copy-on-write
1:M 07 Nov 2019 13:08:10.233 * Background saving terminated with success
1:M 07 Nov 2019 13:08:10.243 * Synchronization with replica 10.44.0.14:6379 succeeded
thank you for the help.
This looks to be an issue with the shell script that is mounted from the ConfigMap. Can you update it as below?
fix-ip.sh: |
  #!/bin/sh
  CLUSTER_CONFIG="/data/nodes.conf"
  echo "creating nodes"
  if [ -f ${CLUSTER_CONFIG} ]; then
    echo "[ INFO ]File:${CLUSTER_CONFIG} is Found"
  else
    touch $CLUSTER_CONFIG
  fi
  if [ -z "${POD_IP}" ]; then
    echo "Unable to determine Pod IP address!"
    exit 1
  fi
  echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
  sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
  echo "done"
  exec "$@"
I just deployed with the updated script and it worked. See the output below:
master $ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   1/1     Running   0          83s
redis-cluster-1   1/1     Running   0          54s
redis-cluster-2   1/1     Running   0          45s
redis-cluster-3   1/1     Running   0          38s
redis-cluster-4   1/1     Running   0          31s
redis-cluster-5   1/1     Running   0          25s
master $ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.40.0.4:6379 to 10.40.0.1:6379
Adding replica 10.40.0.5:6379 to 10.40.0.2:6379
Adding replica 10.40.0.6:6379 to 10.40.0.3:6379
M: 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379
slots:[0-5460] (5461 slots) master
M: 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 10.40.0.2:6379
slots:[5461-10922] (5462 slots) master
M: 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a 10.40.0.3:6379
slots:[10923-16383] (5461 slots) master
S: 1bc8d1b8e2d05b870b902ccdf597c3eece7705df 10.40.0.4:6379
replicates 9984141f922bed94bfa3532ea5cce43682fa524c
S: 5b2b019ba8401d3a8c93a8133db0766b99aac850 10.40.0.5:6379
replicates 76ebee0dd19692c2b6d95f0a492d002cef1c6c17
S: d4b91700b2bb1a3f7327395c58b32bb4d3521887 10.40.0.6:6379
replicates 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 10.40.0.1:6379)
M: 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a 10.40.0.3:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 1bc8d1b8e2d05b870b902ccdf597c3eece7705df 10.40.0.4:6379
slots: (0 slots) slave
replicates 9984141f922bed94bfa3532ea5cce43682fa524c
S: d4b91700b2bb1a3f7327395c58b32bb4d3521887 10.40.0.6:6379
slots: (0 slots) slave
replicates 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a
M: 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 10.40.0.2:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 5b2b019ba8401d3a8c93a8133db0766b99aac850 10.40.0.5:6379
slots: (0 slots) slave
replicates 76ebee0dd19692c2b6d95f0a492d002cef1c6c17
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
master $ kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:61
cluster_stats_messages_pong_sent:76
cluster_stats_messages_sent:137
cluster_stats_messages_ping_received:71
cluster_stats_messages_pong_received:61
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:137
master $ for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role;echo; done
redis-cluster-0
master
588
10.40.0.4
6379
588
redis-cluster-1
master
602
10.40.0.5
6379
602
redis-cluster-2
master
588
10.40.0.6
6379
588
redis-cluster-3
slave
10.40.0.1
6379
connected
602
redis-cluster-4
slave
10.40.0.2
6379
connected
602
redis-cluster-5
slave
10.40.0.3
6379
connected
588