How to deploy Drupal 7 on Kubernetes with a persistent store?

I'm trying to deploy Drupal 7 in Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is the Kubernetes deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pvc
  annotations:
    pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
    app: drupal
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: drupal
  name: drupal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
        - name: init-sites-volume
          image: drupal:7.72
          command: ['/bin/bash', '-c']
          args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
          volumeMounts:
            - mountPath: /data
              name: vol-drupal
      containers:
        - image: drupal:7.72
          name: drupal
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/modules
              name: vol-drupal
              subPath: modules
            - mountPath: /var/www/html/profiles
              name: vol-drupal
              subPath: profiles
            - mountPath: /var/www/html/sites
              name: vol-drupal
              subPath: sites
            - mountPath: /var/www/html/themes
              name: vol-drupal
              subPath: themes
      volumes:
        - name: vol-drupal
          persistentVolumeClaim:
            claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data. Can anyone suggest a fix?
Update: I have also added the manifest for the persistent volume claim.

Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
This matters because storage configured with a group ID (GID) allows writing only by pods using the same GID; a mismatched or missing GID causes permission-denied errors.
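If the write fails, one common fix is a pod-level fsGroup, so the kubelet makes the mounted volume group-writable for the pod. A minimal sketch against the Deployment above, assuming www-data in the drupal:7.72 image has GID 33 (the Debian default):
spec:
  template:
    spec:
      securityContext:
        fsGroup: 33   # volumes mounted by this pod are chgrp'ed to 33 and made group-writable (where the volume type supports it)
      containers:
        - image: drupal:7.72
          name: drupal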
Alternatively, you could try an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm
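With Helm 3, installing the Bitnami chart is typically just the following (the release name my-drupal is arbitrary):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-drupal bitnami/drupal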

Related

How to use nginx as a sidecar container for IIS in Kubernetes?

I have a strange result from using nginx and an IIS server together in a single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page.
However, when I try to go through the reverse proxy, I see only a partial result.
Here are the files:
nginx.conf:
events {
    worker_connections 4096;  ## Default: 1024
}
http {
    server {
        listen 81;
        # Using a variable to prevent nginx from checking the hostname at startup,
        # which leads to a container failure / restart loop, because nginx starts
        # faster than the IIS server.
        set $target "http://127.0.0.1:80/";
        location / {
            proxy_pass $target;
        }
    }
}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ...
  name: ...
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: ...
  template:
    metadata:
      labels:
        pod: ...
      name: ...
    spec:
      containers:
        - image: claudiubelu/nginx:1.15-1-windows-amd64-1809
          name: nginx-reverse-proxy
          volumeMounts:
            - mountPath: "C:/usr/share/nginx/conf"
              name: nginx-conf
          imagePullPolicy: Always
        - image: some-repo/proprietary-server-including-iis
          name: ...
          imagePullPolicy: Always
      nodeSelector:
        kubernetes.io/os: windows
      imagePullSecrets:
        - name: secret1
      volumes:
        - name: nginx-conf
          persistentVolumeClaim:
            claimName: pvc-nginx
Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs. New configs can be swapped in using kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/.
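For example, a typical round trip while testing looks like this (the nginx pod name is a placeholder; deleting it lets the Deployment recreate it with the new config):
kubectl cp ./nginx/conf/nginx.conf nginx-busybox-pod:/mnt/nginx/conf/nginx.conf
kubectl delete pod <nginx-reverse-proxy-pod>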
Busybox pod (used to access the PVC):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-busybox-pod
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "360000"
      imagePullPolicy: Always
      name: busybox
      volumeMounts:
        - name: nginx-conf
          mountPath: "/mnt/nginx/conf"
  restartPolicy: Always
  volumes:
    - name: nginx-conf
      persistentVolumeClaim:
        claimName: pvc-nginx
  nodeSelector:
    kubernetes.io/os: linux
And lastly the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: azurefile
Any ideas why?

Can not create ReadWrite filesystem in kubernetes (ReadOnly mount)

Summary
I am currently in the process of learning Kubernetes, so I have decided to start with a simple application (Mumble).
Setup
My setup is simple: I have one node (the master) where I have removed the taint so Mumble can be deployed on it. This single node is running CentOS Stream, but SELinux is disabled.
The issue
The /srv/mumble directory appears to be read-only. I have tried creating an init container to chown the directory, but that fails for the same reason. The issue appears in both containers, and I am unsure how to change this so that the Mumble application can create files in that directory. The Mumble application runs as user 1000. What am I missing here?
Configs
---
apiVersion: v1
kind: Namespace
metadata:
  name: mumble
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mumble-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    type: DirectoryOrCreate
    path: "/var/lib/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mumble-pv-claim
  namespace: mumble
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mumble-config
  namespace: mumble
data:
  murmur.ini: |
    **cut for brevity**
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mumble-deployment
  namespace: mumble
  labels:
    app: mumble
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mumble
  template:
    metadata:
      labels:
        app: mumble
    spec:
      initContainers:
        - name: storage-setup
          image: busybox:latest
          command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
          securityContext:
            privileged: true
            runAsUser: 0
          volumeMounts:
            - mountPath: "/srv/mumble"
              name: mumble-pv-storage
              readOnly: false
            - name: mumble-config
              subPath: murmur.ini
              mountPath: "/srv/mumble/config.ini"
              readOnly: false
      containers:
        - name: mumble
          image: phlak/mumble
          ports:
            - containerPort: 64738
          env:
            - name: TZ
              value: "America/Denver"
          volumeMounts:
            - mountPath: "/srv/mumble"
              name: mumble-pv-storage
              readOnly: false
            - name: mumble-config
              subPath: murmur.ini
              mountPath: "/srv/mumble/config.ini"
              readOnly: false
      volumes:
        - name: mumble-pv-storage
          persistentVolumeClaim:
            claimName: mumble-pv-claim
        - name: mumble-config
          configMap:
            name: mumble-config
            items:
              - key: murmur.ini
                path: murmur.ini
---
apiVersion: v1
kind: Service
metadata:
  name: mumble-service
spec:
  selector:
    app: mumble
  ports:
    - port: 64738
command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
Not the volume that is mounted as read-only, the ConfigMap is always mounted as read-only. Change the command to:
command: ["sh", "-c", "chown 1000:1000 /srv/mumble"] will work.

k8s initContainer mountPath does not exist after kubectl pod deployment

Below is the deployment YAML. After deploying, I can access the pod and see the mountPath "/usr/share/nginx/html", but I cannot find "/work-dir", which should have been created by the initContainer.
Could someone explain the reason?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
The volume at "/work-dir" is mounted by the init container, and the "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is "gone" as well. The application (nginx) container mounts the same volume, albeit at a different location, providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you might think you could just mount the path /, which would allow you to share all folders underneath. However, a mountPath does not work for /.
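You can see this on the original Pod: the index.html that the init container fetched into /work-dir shows up in the nginx container under the volume's other mount point, while /work-dir itself does not exist there:
kubectl exec init-demo -c nginx -- ls /usr/share/nginx/html
kubectl exec init-demo -c nginx -- ls /work-dir   # fails: no such directory in the nginx container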
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (a PVC and a Deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
        - name: nginx-c
          image: nginx:latest
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /var/www/static/
        - name: alpine-c
          image: alpine:latest
          command: ["/bin/sleep", "10000s"]
          lifecycle:
            postStart:
              exec:
                command: ["/bin/mkdir", "-p", "/work-dir"]
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /work-dir/
      volumes:
        - name: shared-fs-volume
          persistentVolumeClaim:
            claimName: shared-fs-pvc

GKE cannot pull image, even though imagePullSecrets is defined

In Google Kubernetes Engine I created a POC cluster for our company, which worked flawlessly. But now, when I try to create our production environment, I cannot seem to get the imagePullSecrets to work; it's the exact same credentials as in the POC, the same Helm chart, and the exact same regcred YAML file.
Yet I keep getting the classic:
Back-off pulling image "registry.company.co/frontend/company-web/upload": ImagePullBackOff
Pulling manually on the node works with the same credentials as those that I supplied in the imagePullSecrets.
I've tried defining the imagePullSecrets both at the chart level and on the ServiceAccount.
I've verified the secret format and directly copied the credentials there when trying the manual pulls
GKE picks up regcred and shows it in the deployment
Regcred generated by kubectl create secret docker-registry regcred --docker-server="registry.company.co" --docker-username="gitlab" --docker-password="[PASSWORD]"
regcred secret
kind: Secret
apiVersion: v1
metadata:
  name: regcred
  namespace: default
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5jb21wYW55LmNvIjp7InVzZXJuYW1lIjoiZ2l0bGFiIiwicGFzc3dvcmQiOiJbUkVEQUNURURdIiwiYXV0aCI6IloybDBiR0ZpT2x0QmJITnZJRkpsWkdGamRHVmtYUT09In19fQ==
type: kubernetes.io/dockerconfigjson
Service Account
kind: ServiceAccount
apiVersion: v1
metadata:
  name: default
  namespace: default
secrets:
  - name: default-token-jktj5
imagePullSecrets:
  - name: regcred
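As an aside, attaching regcred to the default ServiceAccount can also be done with a one-liner instead of editing the manifest:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'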
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:latest
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      initContainers:
        - name: init-volume-perms
          imagePullPolicy: Always
          image: alpine
          command: ["/bin/sh", "-c"]
          args: ["mkdir /mnt/company-logos; mkdir /mnt/uploads; chown -R 1337:1337 /mnt"]
          volumeMounts:
            - mountPath: /mnt
              name: mypvc
        - name: company-web-uploads
          image: registry.company.co/frontend/company-web/uploads
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/company/web/uploads
              subPath: uploads
              name: mypvc
        - name: company-logos
          image: registry.company.co/backend/pdf-service/company-logos
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/company/shared/company-logos
              subPath: company-logos
              name: mypvc
      volumes:
        - name: mypvc
          gcePersistentDisk:
            pdName: gke-nfs-disk
            fsType: ext4
I've looked around, following different guides from the ground up, with no success, so I'm at a total loss as to what to do.
The default namespace is used all around.
It may be a namespace issue. Can you verify a few things?
Are you using the default namespace in both places?
Is there a Kubernetes version difference between POC and prod?
Can you recreate the working secret with something like kubectl get secret default-token-jktj5 -o yaml > imagepullsecret.yaml? Edit the YAML file to remove the resourceVersion and other status information, then apply it to prod.
I have seen this issue in GKE because of multiline secret conversion to base64. Ensure the secrets match between environments.
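A quick way to check for the base64/multiline issue is to decode the secret in both clusters and diff the results (the leading dot in the key must be escaped in jsonpath):
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode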

Kubernetes: Choose volume dependent on namespace

In a simple Postgres Deployment, I wish to choose the volume dependent on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file; I basically want to make MAKE_THIS_DEPENDENT_ON_NAMESPACE depend on the environment (or namespace) this Deployment is used in.
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.6
          name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql
      volumes:
        - name: postgres-persistent-storage
          gcePersistentDisk:
            pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
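The claim itself is then created once per namespace (e.g. production and staging), each binding to its own disk. A minimal sketch, with the size and access mode as placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: staging
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi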