403 Forbidden error for Kubernetes with Drupal + PostgreSQL

I'm working in GCP to host Drupal with PostgreSQL.
When I browse to the service's external IP to view the portal, I get a 403 Forbidden error.
I tried the command kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous to configure a role, but it didn't help. I'm just learning these things as a student and don't yet understand how they work.
Could anyone please help me, with a brief explanation of what's failing and what's causing it?
Here are my resources.
Postgres-deploy file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/postgres"
              subPath: "postgres"
              name: postgres-data
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: USER_NAME
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: ROOT_PASSWORD
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: task-pv-claim
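Note: the manifest above declares containerPort 3306, which is MySQL's default port; PostgreSQL listens on 5432, so the ports block would normally read:

ports:
  - containerPort: 5432 # PostgreSQL's default port (3306 is MySQL's)

containerPort is mostly informational, so this alone doesn't cause the 403, but a Service whose targetPort copies it would point at the wrong port.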

If this 403 error is due to the Drupal installation, remove the following lines from the .htaccess file and check whether the issue is resolved:
Options -Indexes
Options +FollowSymLinks
If it still persists, this doc covers resolving the 403 error:
403 Forbidden error with Drupal after install
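As for the clusterrolebinding command: it grants the cluster-admin role to anonymous requests against the Kubernetes API server. It has no effect on HTTP responses served by the Drupal pod itself, which is where this 403 originates, and it is a serious security hole besides, so it should be removed:

kubectl delete clusterrolebinding cluster-system-anonymous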

Related

container level securityContext fsGroup

I'm experimenting with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with uid 1000.
On my first try I used hostPath as the storage backend; with that, I had to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change the host directory's mode/ownership each time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
        - name: lh-directus-uploads-volume
          persistentVolumeClaim:
            claimName: lh-directus-uploads-pvc
        - name: lh-directus-dbdata-volume
          persistentVolumeClaim:
            claimName: lh-directus-dbdata-pvc
      containers:
        # Redis Cache
        - name: redis
          image: redis:6
        # Database
        - name: database
          image: postgres:12
          volumeMounts:
            - name: lh-directus-dbdata-volume
              mountPath: /var/lib/postgresql/data
        # Directus
        - name: directus
          image: directus/directus:latest
          securityContext:
            fsGroup: 1000
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I've read about initContainers...
But could you please tell me how to fix this without an initContainer and without manually setting/changing the host path's ownership/mode?
Sincerely
-bino-
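The validation error happens because fsGroup is a field of the pod-level securityContext (PodSecurityContext), not of the container-level SecurityContext. A minimal sketch of the fix, moving it up one level so it sits beside containers (everything else in the manifest stays as above):

spec:
  securityContext:
    fsGroup: 1000 # pod-level: kubelet chowns supported volumes to this group on mount
  containers:
    - name: directus
      image: directus/directus:latest
      volumeMounts:
        - name: lh-directus-uploads-volume
          mountPath: /directus/uploads

With fsGroup set on the pod, volumes that support ownership management (Longhorn's block-backed filesystems do) are mounted group-writable for gid 1000, so no manual chown on the host is needed.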

error when creating "deployment.yaml", Deployment in version "v1" cannot be handled as a Deployment

I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on AWS. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like the conventional examples I've found, but I can't even get this simple example to work. You may find the deployment.yaml content below.
apiVersion: v1
kind: Service
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  ports:
    - port: 80
  selector:
    app: ghost
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      containers:
        - image: ghost:4-alpine
          name: ghost
          env:
            - name: database_client
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: client
            - name: database_connection_host
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: host
            - name: database_connection_user
              valueFrom:
                secretKeyRef:tha
            - name: database_connection_password
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: ghostdcp
            - name: database_connection_database
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: ghostdcd
          ports:
            - containerPort: 2368
              name: ghost
          volumeMounts:
            - name: ghost-persistent-storage
              mountPath: /var/lib/ghost
      volumes:
        - name: ghost-persistent-storage
          persistentVolumeClaim:
            claimName: efs-ghost
I ran this command in the folder containing the file:
kubectl create -f deployment-ghost.yaml --validate=false
service/ghost created
Error from server (BadRequest): error when creating "deployment-ghost.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.ValueFrom: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|lueFrom":"secretKeyR|..., bigger context ...|},{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},{"name":"database_connection_pa|...
My searches didn't turn up any information on this, and I can't get the deployment created. Could someone who understands this please help me through it?
{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},
Your spec has an error:
...
- name: database_connection_user # <-- The error message points to this env variable
  valueFrom:
    secretKeyRef:
      name: <secret name, eg. eks-keys>
      key: <key in the secret>
...
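For completeness: the original entry was truncated after secretKeyRef:, so the value "secretKeyRef:tha" was parsed as a plain string where an object was expected, which is exactly what the server-side error says. Once the name and key fields are restored, you can confirm which keys the secret actually holds (assuming it lives in the current namespace):

kubectl describe secret eks-keys

Also, dropping --validate=false lets kubectl catch this kind of schema error on the client side before anything is sent to the server.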

My pod is always in 'ContainerCreating' state

When I SSH into Minikube and pull the image from Docker Hub, it pulls the image successfully:
$ docker pull mysql:5.7
So I understand the network is not an issue.
But when I try deploying with the following command, the pod stays in 'ContainerCreating' endlessly.
$ kubectl apply -f my-depl.yaml
#my-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-depl
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              subPath: "mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-root-password
                  key: ROOT_PASSWORD
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-data-disk
Please let me know if there is anything wrong with the above YAML file, or share any other debugging tips that could help pull the image successfully from Docker Hub.
I do not know the reason, but my container got created without any problems when I restarted Minikube.
It would help if someone could explain the reason behind this behavior.
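Restarting Minikube can clear a stuck volume attachment or container runtime state, which is a common reason a pod sits in ContainerCreating. Rather than guessing, the pod's events usually name the blocker (image pull, volume attach, CNI); a typical check, assuming the labels from the manifest above:

kubectl describe pod -l app=mysql
kubectl get events --sort-by=.metadata.creationTimestamp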

chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted

Hi, I have set up a small NFS server at home using my Raspberry Pi,
and I want to use it as the default storage for all of my Kubernetes containers.
However, I keep getting this error: chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
Here is my config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgredb
      volumes:
        - name: pv-data
          nfs:
            path: /mnt/infra-data/pg
            server: 192.168.1.150
            readOnly: false
I'm wondering what the cause of this is, and how I can solve it.
Thanks,
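A likely cause, assuming a default NFS export on the Raspberry Pi: most NFS servers enable root_squash, which maps the container's root user to nobody, so the chown/chmod that the postgres entrypoint performs on /var/lib/postgresql/data is refused. One sketch of a fix is to export the share with no_root_squash (the subnet below is an assumption):

# /etc/exports on the NFS server (192.168.1.150)
/mnt/infra-data/pg 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Then reload the export table with sudo exportfs -ra and restart the pod.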

Mount host dir for Postgres on Minikube - permissions issue

I'm trying to set up PostgreSQL on Minikube with the data path being a host folder mounted into Minikube (I'd like to keep my data on the host).
With the Kubernetes object created (below) I get a permission error, the same one as in How to solve permission trouble when running Postgresql from minikube?, although that question doesn't answer the issue; it advises mounting Minikube's VM dir instead.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: storage
          env:
            - name: POSTGRES_PASSWORD
              value: user
            - name: POSTGRES_USER
              value: pass
            - name: POSTGRES_DB
              value: k8s
      volumes:
        - name: storage
          hostPath:
            path: /data/postgres
Is there any other way to do this, short of building my own image on top of Postgres and playing with the permissions somehow? I'm on macOS with Minikube 0.30.0, and I'm experiencing this with both the VirtualBox and hyperkit drivers for Minikube.
Look at these lines from the hostPath docs:
the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a hostPath volume
So either you have to run as root, or you have to change the file permissions of the /data/postgres directory.
However, you can run your Postgres container as root without rebuilding the Docker image.
You have to add the following to your container:
securityContext:
  runAsUser: 0
Your YAML should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: storage
          env:
            - name: POSTGRES_PASSWORD
              value: user
            - name: POSTGRES_USER
              value: pass
            - name: POSTGRES_DB
              value: k8s
          securityContext:
            runAsUser: 0
      volumes:
        - name: storage
          hostPath:
            path: /data/postgres
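Alternatively, if running as root is undesirable, the host directory's ownership can be fixed once from inside the Minikube VM instead; the official postgres image runs as uid 999 (the postgres user), so a one-time chown is enough:

minikube ssh
sudo mkdir -p /data/postgres
sudo chown -R 999:999 /data/postgres

Note that on macOS this hostPath lives inside the Minikube VM, not on the Mac itself, so the data still isn't on the host unless the folder is also shared in, e.g. with minikube mount.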