Connecting to a GKE pod running Postgres with the Postico 2 client - kubernetes

I want to connect to a Postgres instance that is running in a pod in GKE.
I think one way to achieve this is with kubectl port forwarding.
Locally I have Docker Desktop, and when I apply the YAML files I am able to connect to the database. The YAMLs I am using in GKE are almost identical.
secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: staging
  name: postgres-secrets
type: Opaque
data:
  MYAPPAPI_DATABASE_NAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_USERNAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_PASSWORD: XXXENCODEDXXX
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: staging
  name: db-data-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql/data"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: staging
  name: db-data-pvc
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgres-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
        - name: postgres-db
          image: postgres:12.4
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-db
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_NAME
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_PASSWORD
      volumes:
        - name: postgres-db
          persistentVolumeClaim:
            claimName: db-data-pvc
svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgresdb-service
spec:
  type: ClusterIP
  selector:
    app: postgres-db
  ports:
    - port: 5432
and it seems that everything is working.
Then I execute kubectl port-forward postgres-db-podname 5433:5432 -n staging, and when I try to connect it throws:
FATAL: role "myappuserdb" does not exist
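To narrow this down, I can check which credentials the container actually received from the Secret and which roles exist in the initialized data directory. A hedged sketch, reusing the pod placeholder from above:
# check which credentials the container received from the Secret
kubectl exec postgres-db-podname -n staging -- env | grep POSTGRES
# list the roles present in the initialized data directory
kubectl exec postgres-db-podname -n staging -- sh -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "\du"'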
UPDATE 1
This is from the GKE YAML:
spec:
  containers:
    - env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_NAME
              name: postgres-secrets
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_USERNAME
              name: postgres-secrets
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_PASSWORD
              name: postgres-secrets
UPDATE 2
I will explain what happened and how I solved it.
The first time I applied the files (kubectl apply -f k8s/), the deployment's POSTGRES_USER environment variable referenced the wrong secret key, MYAPPAPI_DATABASE_NAME, when it should have referenced MYAPPAPI_DATABASE_USERNAME.
After that first time, every time I ran kubectl delete -f k8s/ the resources were deleted. However, when I created the resources again, the data created in the previous step was not cleaned up, so the database kept the role created from the wrong key.
I deleted the cluster, created a new one, and everything worked. I still need to check if there is a way to clean the data in a Kubernetes volume.
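One way I could do that without recreating the cluster (an untested, hedged sketch using the names from my manifests above): since the PersistentVolume is a hostPath on the node, deleting and recreating the PV/PVC does not remove the files already written there, so the old initialized cluster with the wrong role survives. Wiping the data directory from inside the pod before recreating the resources should force the postgres image to run initdb again:
# WARNING: this destroys all data in the database
kubectl exec postgres-db-podname -n staging -- sh -c 'rm -rf /var/lib/postgresql/data/*'
kubectl delete -f k8s/ && kubectl apply -f k8s/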

In your deployment's env spec you have assigned the wrong value for POSTGRES_USER: you have assigned POSTGRES_USER = MYAPPAPI_DATABASE_NAME,
but it should be POSTGRES_USER = MYAPPAPI_DATABASE_USERNAME.
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_NAME # <<< this is the value that needs to change >>>
Please try this one:
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_USERNAME
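Note that correcting the key alone will not re-create the role if the volume already holds a data directory initialized with the old value, because the postgres image only runs initdb on an empty data directory. A hedged example of picking up the change once the volume is clean (deployment name and manifest path taken from the question):
kubectl apply -f k8s/deployment.yaml
kubectl rollout restart deployment/postgres-db -n staging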

Related

When I attach secrets in a deployment it doesn't create `secretObjects` for pods to get parameters

I am trying to create pods and attach SSM parameters to them. I created a secret.yaml file that defines a SecretProviderClass with secretObjects to attach these to the pods. Here is the file:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: default
spec:
  provider: aws
  secretObjects:
    - secretName: dbsecret
      type: Opaque
      data:
        - objectName: dbusername
          key: username
        - objectName: dbpassword
          key: password
  parameters:
    objects: |
      - objectName: "secure-store"
        objectType: "ssmparameter"
        jmesPath:
          - path: username
            objectAlias: dbusername
          - path: password
            objectAlias: dbpassword
Also, I created a service account to attach to the deployment. Here is the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provider-user
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789078:role/test-oidc
Here is the deployment file, where I tried to create env variables from the synced secret in order to get the parameters from Parameter Store into the pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-app
  template:
    metadata:
      labels:
        app: new-app
    spec:
      containers:
        - name: new-app
          image: nginx:1.14.2
          resources:
            requests:
              memory: "300Mi"
              cpu: "500m"
            limits:
              memory: "500Mi"
              cpu: "1000m"
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
          env:
            - name: DB_USERNAME_01
              valueFrom:
                secretKeyRef:
                  name: dbsecret
                  key: username
            - name: DB_PASSWORD_01
              valueFrom:
                secretKeyRef:
                  name: dbsecret
                  key: password
      serviceAccountName: csi-provider-user
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "aws-secrets"
But when I apply these files and create the deployment, I get this error:
Error: secret "dbsecret" not found
It doesn't create the secret objects for some reason:
secretObjects:
  - secretName: dbsecret
I might be missing some configuration. Thanks for your help!
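One thing I still need to verify, as a hedged aside: the Secrets Store CSI Driver only creates the Kubernetes Secret described under secretObjects after a pod mounting the CSI volume actually starts, and only if secret syncing was enabled when the driver was installed. With a Helm install that switch looks roughly like this (repo alias and release name are assumptions):
helm upgrade --install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=true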

postgres on k8s with glusterfs as storage

I deployed a Postgres database on k8s with GlusterFS as the volume. But every time I restart my pod, all of the data is lost. Why is that?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: gitlab
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres
          env:
            - name: POSTGRES_USERNAME
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_db
      volumes:
        - name: postgres
          glusterfs:
            endpoints: glusterfs-cluster
            path: gv
Define PV and PVC objects; see below for reference.
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Then bind the PVC to the pod as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          ...
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-pv-claim
      volumes:
        - name: postgres-pv-claim
          persistentVolumeClaim:
            claimName: postgres-pv-claim
As per the Kubernetes documentation (https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs), unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. I suggest raising an issue at https://github.com/kubernetes/kubernetes/issues/new/choose
If you want to install GitLab with a PostgreSQL backend, it is easier to use the Helm charts below.
https://docs.gitlab.com/charts/
https://artifacthub.io/packages/helm/bitnami/postgresql
https://artifacthub.io/packages/helm/bitnami/postgresql-ha
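For example, a minimal install of the Bitnami PostgreSQL chart might look like the following; the release name, namespace, and values are assumptions, and value names differ between chart versions:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install gitlab-postgres bitnami/postgresql \
  --namespace gitlab \
  --set auth.database=gitlabhq_production \
  --set primary.persistence.size=10Gi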
You can also do this:
1. For stateful services such as databases, deploy with a StatefulSet controller instead of a Deployment (a minimal sketch follows below).
2. Use shared, network-backed storage rather than local volumes, since a recreated pod may be scheduled onto a different node.
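A minimal sketch of the StatefulSet approach, reusing the gitlab secret from the question and assuming a StorageClass named glusterfs-sc backed by shared storage exists; this is illustrative rather than a drop-in manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: gitlab
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_password
          volumeMounts:
            # mount the per-replica claim at the actual Postgres data path
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    # one PVC is created per replica from this template
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs-sc
        resources:
          requests:
            storage: 10Gi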

Why am I getting this OCI runtime error even though the deployment is created

My YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - name: mongodbport
              containerPort: 27017
              protocol: TCP
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
My secret YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: opaque
data:
  mongo-root-username: JwB2AG8AbABoAGEAcgBkACcA
  mongo-root-password: JwBEAGgAYQBuAHUAcwBoACcA
Error image:
A description of the error can be found here.
There is also a reference to the DB credentials, if you look closely; if that's needed to debug, I would be happy to provide it. Thanks in advance!
Something is wrong with your secret. Are you trying to store a binary value or null bytes in your secret?
Please take a look: https://github.com/kubernetes/kubernetes/issues/89906
There are 2 issues with your current configuration. I've tested this on my Minikube cluster.
Issue 1 is related to your secret.
When you decode your secret you will find out that the values of mongo-root-username and mongo-root-password contain '. You can verify it using these commands:
$ echo JwB2AG8AbABoAGEAcgBkACcA | base64 --decode
'vo...rd'
$ echo JwBEAGgAYQBuAHUAcwBoACcA | base64 --decode
'Dh..sh'
In the Kubernetes Secret documentation, under one of the use cases, you can find a note about '.
Note:
Special characters such as $, \, *, =, and ! will be interpreted by your shell and require escaping. In most shells, the easiest way to escape the password is to surround it with single quotes ('). For example, if your actual password is S!B\*d$zDsb=, you should execute the command this way:
$ kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
But if you then decode the stored value, you will see that the password does not contain the ' characters.
$ kubectl get secrets/dev-db-secret --template={{.data.password}} | base64 --decode
S!B\*d$zDsb=
Issue 2 is related to the lack of any volume where your MongoDB could save data.
$ kubectl logs mongodb-deployment-79d5b75846-jk9ss
...
Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: No such file or directory
You have to provide some volume, otherwise your pod will get errors.
Solution
Change the secrets mongo-root-username and mongo-root-password to values without '. You can do it using this command:
$ kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=YourPassword
or manually with proper encoding:
$ echo devuser | base64
ZGV2dXNlcgo=
$ echo YourPassword | base64
WW91clBhc3N3b3JkCg==
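One detail worth noting: plain echo appends a trailing newline, which then becomes part of the decoded value; echo -n avoids that:
$ echo -n devuser | base64
ZGV2dXNlcg==
$ echo -n YourPassword | base64
WW91clBhc3N3b3Jk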
While you are using database images like MySQL or MongoDB, you have to specify a volume to allow your database to perform read/write operations. Otherwise your container will get stuck in a CrashLoopBackOff loop.
Below are my YAMLs, tested on Minikube 1.16; the secret contains your values without '.
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: opaque
data:
  mongo-root-username: dm9saGFyZAo=
  mongo-root-password: RGhhbnVzaAo=
pvpvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/mongopv/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mongo-claim
  name: mongo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - image: mongo
          name: mongodb
          ports:
            - name: mongodbport
              containerPort: 27017
              protocol: TCP
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
          volumeMounts:
            - mountPath: /data/db
              name: mongo-claim
      volumes:
        - name: mongo-claim
          persistentVolumeClaim:
            claimName: mongo-claim
Just as additional information: if you use more replicas, you will need to provide a new PV and PVC for each one. It's good practice to use a StatefulSet with volumeClaimTemplates for that, as sketched below.
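As a rough illustration (names assumed, not taken from the question), the claim template stanza inside a StatefulSet spec replaces the single shared PVC:
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: manual
        resources:
          requests:
            storage: 1Gi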

2 pod has unbound immediate PersistentVolumeClaims - Kubernetes

I am trying to attach PersistentVolumeClaims to my pods. Now the problem is that although the deployment succeeds, the pods stay in Pending state. When I describe the pods, I get the error explaining why they are not spinning up:
Warning FailedScheduling 20s (x3 over 22s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pod has unbound immediate PersistentVolumeClaims.
This is the YAML for creating the PersistentVolumeClaim and referencing it in the deployments:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  namespace: mongo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  namespace: mongo
  labels:
    name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-password
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  namespace: mongo
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-url
                  key: database_url
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-password
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
---
.
.
.
I have removed the other YAML configurations from the above and kept only the necessary ones for easier reading.
And when I check the status of the PVC using kubectl get pvc -n mongo, I get the Pending status below:
my-pvc Pending 9m54s
Can someone tell me what I am doing wrong?
As described in the answer to pod has unbound PersistentVolumeClaims, if you use a PersistentVolumeClaim you typically need a volume provisioner for Dynamic Volume Provisioning. The bigger cloud providers typically have this, and Minikube also has one that can be enabled.
Unless you have a volume provisioner in your cluster, you need to create a PersistentVolume resource, and possibly also a StorageClass, and declare how to use your storage system.
Configure a Pod to Use a PersistentVolume for Storage describes how to create a PersistentVolume with a hostPath; that may be good for learning or development, but is typically not used by applications in production.
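Two quick checks that usually show which case applies (standard kubectl commands; the namespace is the one from the question):
# list the StorageClasses the cluster offers; one marked (default) means PVCs
# without an explicit storageClassName can be provisioned dynamically
kubectl get storageclass
# the Events section explains why the claim is still Pending
kubectl describe pvc my-pvc -n mongo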

Pod status CrashLoopBackOff

I have a StatefulSet whose status is showing CrashLoopBackOff. All other components are working fine. When I run kubectl -n magento get po I see the pod status is CrashLoopBackOff, and the logs show:
Initializing database
2020-07-22T11:57:25.498116Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-07-22T11:57:25.499540Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2020-07-22T11:57:25.499578Z 0 [ERROR] Aborting
This is the Kubernetes manifest:
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: db
    k8s-app: magento
spec:
  selector:
    app: db
  ports:
    - name: db
      port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
  namespace: magento
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  template:
    metadata:
      labels:
        app: db
        k8s-app: magento
    spec:
      containers:
        - args:
            - --max_allowed_packet=134217728
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: data
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_NAME
            - name: MYSQL_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_PASS
            - name: MYSQL_USER
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_USER
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: DB_ROOT_PASS
          image: percona:5.7
          name: db
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
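What I plan to check, as a hedged guess about the error in the logs above: mysqld's --initialize step requires an empty data directory, and freshly provisioned volumes often contain a lost+found directory at the mount root. If that is the cause here, mounting a subdirectory of the claim instead of its root is a common workaround, for example:
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: data
              # subPath keeps lost+found at the volume root out of the MySQL data dir
              subPath: mysql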