SQL script isn't running in Kubernetes, but runs fine using just Docker - postgresql

Have a pretty simple test.sql:
SELECT 'CREATE DATABASE test_dev'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'test_dev')\gexec
\c test_dev
CREATE TABLE IF NOT EXISTS test_table (
username varchar(255)
);
INSERT INTO test_table(username)
VALUES ('test name');
Doing the following does what I expected it to do:
Dockerfile.dev
FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
docker build -t testproj/postgres -f db/Dockerfile.dev .
docker run -p 5432:5432 testproj/postgres
This creates the database, switches to it, creates a table, and inserts the values.
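To double-check that locally you can query the running container; a quick sketch (use docker ps to find the real container name, the placeholder below is not from the question):
docker ps                                   # find the container name/id
docker exec -it <container> psql -U postgres -c '\l'
docker exec -it <container> psql -U postgres -d test_dev -c 'SELECT * FROM test_table;'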
Now I'm trying to do the same in Kubernetes with Skaffold, but nothing really seems to happen: there are no error messages, yet nothing changes in Postgres.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: init-script
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-storage
- name: init-script
persistentVolumeClaim:
claimName: init-script
containers:
- name: postgres
image: postgres
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
subPath: postgres
- name: init-script
mountPath: /docker-entrypoint-initdb.d
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
What am I doing wrong here?
I basically tried to follow the answer here, but it isn't panning out. It sounded like I needed to move the .sql file to a persistent volume.
https://stackoverflow.com/a/53069399/3123109

You don't want to mount a volume over the entrypoint folder. You are basically masking the script in your image with an empty folder. Also, you aren't using your modified image, so it wouldn't have your script in the first place.
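With Skaffold in the picture, that usually means the Deployment's image: has to match an artifact Skaffold builds. A rough skaffold.yaml sketch under that assumption (the apiVersion and paths are placeholders, adjust them to your Skaffold version and repo layout):
apiVersion: skaffold/v2beta29       # use whatever apiVersion your Skaffold release supports
kind: Config
build:
  artifacts:
    - image: testproj/postgres          # must match the image: field in the Deployment
      docker:
        dockerfile: db/Dockerfile.dev   # path assumed from the docker build command above
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                      # wherever your manifests live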

I'm not 100% sure your image will work on Kubernetes.
I would recommend using something tested, like the Bitnami PostgreSQL chart; it might also be helpful to read Using Kubernetes to Deploy PostgreSQL.
If you want to use your own image inside Kubernetes, you need to push the image to a private Docker registry or repository. This is explained in Pull an Image from a Private Registry.
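A minimal sketch of that, assuming a private registry (the server, credentials, and the secret name regcred are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password>
Then pull the image by its full name and reference the secret from the pod spec:
spec:
  containers:
    - name: postgres
      image: <your-registry-server>/testproj/postgres
  imagePullSecrets:
    - name: regcred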
As for test.sql, you can store it in a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: test-sql
data:
test.sql: |
SELECT 'CREATE DATABASE test_dev'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'test_dev')\gexec
\c test_dev
CREATE TABLE IF NOT EXISTS test_table (
username varchar(255)
);
INSERT INTO test_table(username)
VALUES ('test name');
You can later mount this as an init script or execute it after the pod is created.
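For example, mounting that ConfigMap into the image's init directory would look roughly like this in the Deployment's pod spec (the volume name is arbitrary):
spec:
  containers:
    - name: postgres
      image: postgres
      volumeMounts:
        - name: init-sql
          mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: init-sql
      configMap:
        name: test-sql
Keep in mind that the entrypoint only runs these scripts when the data directory is empty, so a reused PersistentVolume with existing data will skip them.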

coderanger pointed out a glaring mistake that got me going in the right direction: I wasn't referencing the modified image.
I updated accordingly:
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
containers:
- name: postgres
image: testproject/postgres
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
subPath: postgres
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
Then the data loaded as expected.
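A quick way to confirm from outside the container (the pod name is a placeholder):
kubectl get pods
kubectl exec -it <postgres-pod> -- psql -U postgres -c '\l'
kubectl exec -it <postgres-pod> -- psql -U postgres -d test_dev -c 'SELECT * FROM test_table;'
Note that if an old volume is reused, the init scripts are skipped because the data directory is no longer empty; deleting the postgres-storage PVC (and the data behind it) forces a fresh initialization.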

Related

Accessing Postgresql data of Kubernetes cluster

I have a Kubernetes cluster with two replicas of a PostgreSQL database in it, and I want to see the values stored in the database.
When I exec into one of the two Postgres pods (kubectl exec --stdin --tty [postgres_pod] -- /bin/bash) and check the database from within, I only see part of the DB. The rest of the data is in the other Postgres pod, and I don't see any directory created by the persistent volumes holding the whole database.
In short, I created 4 tables; one Postgres pod has all 4 tables but 2 of them are empty, while the other pod has 3 tables, and the tables that were empty in the first pod are filled with data there.
Why don't the pods have the same data in them?
How can I access and download the entire database?
PS: I deploy the cluster using Helm on minikube.
Here are the YAML files:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: database-pg
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
PGDATA: /data/pgdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
ports:
- name: postgres
port: 5432
nodePort: 30432
type: NodePort
selector:
app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres-service
selector:
matchLabels:
app: postgres
replicas: 2
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
volumeMounts:
- name: postgres-disk
mountPath: /data
# Config from ConfigMap
envFrom:
- configMapRef:
name: postgres-config
volumeClaimTemplates:
- metadata:
name: postgres-disk
spec:
accessModes: ["ReadWriteOnce"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 2
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
---
I found a solution to my problem of downloading the volume directory; however, when I run multiple replicas of Postgres, the tables of the DB are still scattered between the pods.
Here's what I did to download the postgres volume:
First of all, minikube only persists files under some specific directories:
minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
So I've changed the mount path to be under the /data directory. This made the database volume visible.
After this I ssh'ed into minikube and copied the database volume to a new directory (I used /home/docker, since the minikube user is docker).
sudo cp -R /data/pgdata /home/docker
The pgdata volume was still owned by root (access denied error), so I changed it to be owned by docker. For this I also set a new password that I knew:
sudo passwd docker # change password for docker user
sudo chown -R docker: /home/docker/pgdata # change owner from root to docker
Then you can exit and copy the directory to your local machine:
exit
scp -r -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/pgdata [your_local_path]
NOTE
Mario's advice to use pg_dump is probably a better way to copy a database. I still wanted to download the volume directory to see whether it held the full database when each pod only has part of the tables. In the end it turned out it doesn't.
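For reference, that pg_dump approach can be run through kubectl exec; a sketch using the database and user from the ConfigMap above (the pod name is a placeholder):
kubectl exec <postgres-pod> -- pg_dump -U postgres database-pg > dump.sql
# or dump every database in the instance
kubectl exec <postgres-pod> -- pg_dumpall -U postgres > all.sql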

Mount an Azure disk to Azure Kubernetes for a PostgreSQL pod

I want to mount an Azure disk to Azure Kubernetes for a PostgreSQL pod. My YAML files:
postgres-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
spec:
capacity:
storage: 80Gi
storageClassName: manual
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
azureDisk:
kind: Managed
diskName: es-us-dev-core-test
diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 80Gi
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: dev-test
POSTGRES_USER: admintpost
POSTGRES_PASSWORD: ada3dassasa
StatefulSet.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres-statefulset
labels:
app: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:12
envFrom:
- configMapRef:
name: postgres-config
ports:
- containerPort: 5432
name: postgresdb
volumeMounts:
- name: pv-data
mountPath: /var/lib/postgresql/data
volumes:
- name: pv-data
persistentVolumeClaim:
claimName: postgres-pv-claim
Instructions for creating the disk: https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
I get an error that it cannot connect to the disk. Could you please tell me how to add the Azure disk to the pod? Thanks.
Create Azure Disk in the correct resource group
Looking at the file postgres-storage.yaml:
in spec.azureDisk.diskURI I see that you have created the disk in the resource group kubernetes_resources_group. However, you should create the disk inside a resource group whose name is something like this:
MC_kubernetes_resources_group_<your cluster name>_<region of the cluster>
Make sure that you create the disk in the same availability zone as your cluster.
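For example, with the Azure CLI (the cluster name and region are placeholders; az aks show prints the node resource group):
# find the resource group the AKS nodes live in
az aks show --resource-group kubernetes_resources_group --name <cluster-name> --query nodeResourceGroup -o tsv

# create the managed disk there; the returned id is what goes into diskURI
az disk create \
  --resource-group MC_kubernetes_resources_group_<cluster-name>_<region> \
  --name es-us-dev-core-test \
  --size-gb 80 \
  --query id \
  --output tsv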
Set caching mode to None
In the file postgres-storage.yaml:
set spec.azureDisk.cachingMode to None
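That is, the azureDisk block of the PersistentVolume ends up looking like this (diskURI shortened to placeholders):
azureDisk:
  kind: Managed
  diskName: es-us-dev-core-test
  diskURI: /subscriptions/<subscription-id>/resourceGroups/<node-resource-group>/providers/Microsoft.Compute/disks/es-us-dev-core-test
  cachingMode: None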
Fix the definition of StatefulSet.yml
If you're using Azure Disks then in the file StatefulSet.yml:
in spec.template.spec you should replace the following:
volumeMounts:
- name: pv-data
mountPath: /var/lib/postgresql/data
with this:
volumeMounts:
- name: pv-data
mountPath: /var/lib/postgresql/data
subPath: pgdata
EDIT: fixed some mistakes in the last part.

How to deploy MariaDB on kubernetes with some default schema and data?

For some context, I'm trying to build a staging/testing system on Kubernetes which starts with deploying a MariaDB on the cluster with some schema and data. I have a truncated/cleansed DB dump from prod to help me with that. Let's call that file dbdump.sql; it is present on my local box in the path /home/rjosh/database/script/. After much research, here is what my YAML file looks like:
apiVersion: v1
kind: PersistentVolume
metadata:
name: m3ma-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: m3ma-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
---
apiVersion: v1
kind: Service
metadata:
name: m3ma
spec:
ports:
- port: 3306
selector:
app: m3ma
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: m3ma
spec:
selector:
matchLabels:
app: m3ma
strategy:
type: Recreate
template:
metadata:
labels:
app: m3ma
spec:
containers:
- image: mariadb:10.2
name: m3ma
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: m3ma
volumeMounts:
- name: m3ma-persistent-storage
mountPath: /var/lib/mysql/
- name: m3ma-host-path
mountPath: /docker-entrypoint-initdb.d/
volumes:
- name: m3ma-persistent-storage
persistentVolumeClaim:
claimName: m3ma-pv-claim
- name: m3ma-host-path
hostPath:
path: /home/smaikap/database/script/
type: Directory
The MariaDB instance is coming up, but not with the schema and data that are present in /home/rjosh/database/script/dbdump.sql.
Basically, the mount is not working. If I connect to the pod and check /docker-entrypoint-initdb.d/, there is nothing there. How do I go about this?
A bit more detail: currently I'm testing it on minikube, but soon it will have to work on a GKE cluster. Looking at the documentation, hostPath is not the right choice for GKE. So, what is the correct way of doing this?
Are you sure your home directory is visible to Kubernetes? Minikube generally creates a little VM to run things in, which wouldn't have your home dir in it. The more usual way to handle this would be to make a very small new Docker image yourself like:
FROM mariadb:10.2
COPY dbdump.sql /docker-entrypoint-initdb.d/
Then push it to a registry somewhere and use that image instead.
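Something along these lines, with the registry and image names as placeholders:
docker build -t <your-registry>/m3ma-db:latest .
docker push <your-registry>/m3ma-db:latest
Then point the Deployment's image: at <your-registry>/m3ma-db:latest and drop the m3ma-host-path volume and its mount, since the dump is baked into the image.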

Kubernetes StatefulSet - does not restore data on pod restart

Kubernetes version - 1.8
Created statefulset for postgres database with pvc
Added some tables to database
Restarted pod by scaling statefulset to 0 and then again 1
Tables created in step 2 are no longer available
Tried another scenario with the same steps on a docker-for-desktop cluster, k8s version 1.10
Created statefulset for postgres database with pvc
Added some tables to database
Restarted docker for desktop
Tables created in step 2 are no longer available
k8s manifest
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: kong
POSTGRES_USER: kong
POSTGRES_PASSWORD: kong
PGDATA: /var/lib/postgresql/data/pgdata
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
app: postgres
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/postgresql/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
labels:
app: postgres
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: postgres
  labels:
    app: postgres
spec:
ports:
- name: pgql
port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgres
---
apiVersion: apps/v1beta2 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:9.6
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pvc
---
If you have multiple nodes, the issue you see is totally expected. If you want to use hostPath as a PersistentVolume in a multi-node cluster, you must use some shared filesystem like GlusterFS or Ceph and place your /mnt/postgresql/data folder on that shared filesystem.
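For illustration only, a GlusterFS-backed PersistentVolume would look roughly like this (the endpoints object and volume path are assumptions about your Gluster setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: glusterfs-cluster   # an Endpoints/Service object listing the Gluster nodes
    path: postgres-volume          # the Gluster volume name
    readOnly: false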

Kubernetes: Choose volume dependent on namespace

In a simple Postgres Deployment, I wish to choose the volume dependent on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file; I basically want to make MAKE_THIS_DEPENDENT_ON_NAMESPACE dependent on the environment (or namespace) this Deployment is used in.
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: postgres:9.6
name: postgres
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql
volumes:
- name: postgres-persistent-storage
gcePersistentDisk:
pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: dockerfile/nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
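The claim name stays the same in the manifest, and you create one PVC per namespace, each of which can bind to different storage; a sketch (namespace, size, and class are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: staging          # create one of these per namespace (staging, production, ...)
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # size and storageClassName can differ per environment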