How to upgrade postgresql inside a Kubernetes pod? - postgresql

I have a kubernetes cluster running an app. Part of the cluster is a postgresql pod, currently running version 10.4. Unfortunately, I discovered that I need to upgrade the postgresql version.
The postgres yaml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
The postgresql database already has some data in it. I need to find a way to upgrade the cluster while in production.
If I simply try to change the image to 12.0 and run kubectl apply I get an error:
2020-11-15 22:48:08.332 UTC [1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 12.5 (Debian 12.5-1.pgdg100+1).
So I understand that I need to manually upgrade the postgres database inside the cluster, and only then will I be able to fix the yaml. Is that correct?

I tried @Justin's method, but I encountered an issue: I couldn't stop the running postgres process inside the pod (for some reason there is no access to the postgresql service inside the container; you can see more about that issue here).
Since I couldn't upgrade postgresql inside the pod itself, what I did in the end was to create a parallel postgres pod in Kubernetes which holds the new version. Then I dumped the database from the old server, copied it to the new server, and used the dump to initialize the database there.
Here are the steps one by one:
Create a parallel postgres service with the new version
In old version pod:
pg_dumpall -U postgresadmin -h localhost -p 5432 > dumpall.sql
In the host:
kubectl cp postgres-old-pod:/dumpall.sql dumpall.sql
kubectl cp dumpall.sql postgres2-new-pod:/dumpall.sql
Exec into the new pod.
An extra step that I needed, because for some reason the new pod didn't have the 'postgres' user created:
Get into the postgres client using your credentials:
psql postgresql://postgresadmin:pass1234@127.0.0.1:5432/postgresdb?sslmode=disable
postgresdb=# CREATE ROLE postgres LOGIN SUPERUSER PASSWORD 'somepassword123';
Then exit psql and drop back to the normal user.
Finally update the database:
psql -U postgres -W -f dumpall.sql
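The dump, copy, and restore steps above can also be collapsed into a single pipe, streaming the dump from the old pod straight into the new one with no intermediate file (a sketch; the pod names and user names are the placeholders from the steps above and should be adjusted to your setup):

```shell
# Stream pg_dumpall output from the old pod directly into psql on the new pod.
# postgres-old-pod, postgres2-new-pod and the user names are placeholders.
kubectl exec postgres-old-pod -- \
  pg_dumpall -U postgresadmin -h localhost -p 5432 \
| kubectl exec -i postgres2-new-pod -- \
  psql -U postgres
```

This avoids needing tar in the images (a kubectl cp requirement) and any scratch space inside the pods.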

Thanks to @justadev for the answer. Some additions:
psql -U postgres -d keycloak -W -f dumpall.sql
I had to add the -d keycloak database flag because, although the psql output looked fine during the import, the data was missing from the database afterwards. You need to explicitly indicate the target database for psql.
So, check the psql flags here: https://www.postgresql.org/docs/current/app-psql.html
By the way, I managed to upgrade from Postgres 11 to Postgres 14.5 this way.
Also, I want to add this:
tar may be absent from a pod's image, which means that kubectl cp will not work.
Here is the workaround:
Copy data from a pod to a local machine:
kubectl exec -n ${namespace} ${postgresql_pod} -- cat db_dump.sql > db_dump.sql
Copy data from a local machine to a pod:
cat db_dump.sql | kubectl exec -i -n ${namespace} ${postgresql_pod} "--" sh -c "cat > db_dump.sql"
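The same cat-based workaround can carry binary dumps (e.g. pg_dump -Fc custom format) if you wrap the stream in base64, which protects it from any terminal or shell mangling. A sketch, assuming GNU coreutils on both ends; the path and variable names are placeholders like those above:

```shell
# Copy a binary dump out of a pod without tar/kubectl cp.
# ${namespace}, ${postgresql_pod} and /tmp/db_dump.custom are placeholders.
kubectl exec -n ${namespace} ${postgresql_pod} -- \
  base64 /tmp/db_dump.custom | base64 -d > db_dump.custom
```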

Using this post, How to upgrade postgresql database from 10 to 12 without losing data for openproject, as the basis for my answer, I'm converting it to a container-with-volume friendly approach. I am assuming you're using the official Postgresql image on Docker Hub.
Backup the data - Out of scope for this answer. There are other people better suited to answering that question.
Upgrade postgres from inside the pod and migrate the data
Get a shell in your postgres pod
# insert your pod and namespace here
kubectl exec -it postgresl-shsdjkfshd -n default -- /bin/sh
Run the following inside the container
apt update
apt-get install postgresql-12 postgresql-server-dev-12
service postgresql stop
# Migrate the data
su postgres
/usr/lib/postgresql/12/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/10/main \
--new-datadir=/var/lib/postgresql/12/main \
--old-bindir=/usr/lib/postgresql/10/bin \
--new-bindir=/usr/lib/postgresql/12/bin \
--old-options '-c config_file=/etc/postgresql/10/main/postgresql.conf' \
--new-options '-c config_file=/etc/postgresql/12/main/postgresql.conf'
exit # exits the postgres user
The next bit is verbatim taken from the linked post:
Swap the ports of the old and new postgres versions.
vim /etc/postgresql/12/main/postgresql.conf
#change port to 5432
vim /etc/postgresql/10/main/postgresql.conf
#change port to 5433
Start the postgresql service
service postgresql start
Log in as postgres user
su postgres
Check your new postgres version
psql -c "SELECT version();"
Run the generated new cluster script
./analyze_new_cluster.sh
Return to the normal (default) user and clean up the old version's leftovers
apt-get remove postgresql-10 postgresql-server-dev-10
#uninstalls postgres packages
rm -rf /etc/postgresql/10/
#removes the old postgresql directory
su postgres
#login as postgres user
./delete_old_cluster.sh
#delete the old cluster data
Now change the deployment YAML image reference to the Postgres 12 and kubectl apply
Check the logs to see if it started up correctly.

Related

How can I perform incremental backup for MongoDB which is not replicaset( StandAlone) on Azure Kubernetes Services(AKS)?

I have set up mongodb on a Kubernetes cluster on Azure (AKS), and I am not sure where the conf file for changes will be. I want to set up incremental backup for this deployment on AKS, but it's showing me the error "2022-03-08T10:21:54.536+0000 namespace with DB local and collection oplog.rs does not exist".
Please help me with incremental backups on standalone Mongodb Server.
I am using mongodump command as - mongodump --host=xx.xx.xx.xx --port 27017 -d local -c oplog.rs -u username --authenticationDatabase admin -p password --out="/var/opt/baclup"
Is there any way without changing to Replicaset?
Also, how do I know where MongoDB is installed on the pod?
You can take the backup in different ways: manually, via port-forwarding, or with cronjobs.
Manual, using a command
Run mongodump against the mongodb pod:
kubectl exec -it <mongodb-pod-name> -- mongodump --out ./mongodb/backup <Update your command as per need >
or
kubectl exec -it <mongodb-pod-name> -- mongodump --dbpath /data/db --out ./mongodb/backup
Or collection backup
kubectl exec -it <mongodb-pod-name> -- mongodump --collection MYCOLLECTION --db DB_NAME --out ./mongodb/backup
Port-forwarding
Forward the service port to your local machine:
kubectl port-forward svc/mongodb 27027:27017
mongodump --host localhost --port 27027 --collection MYCOLLECTION --db DB_NAME -u USERNAME -p --out ./mongodb-backup
Automated using cronjob
A CronJob will hit the MongoDB service, save the backup, and upload it to a GCP or S3 bucket; you can configure it as needed.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongodb-backup
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mongodb-backup
              image: mongo:4.4.0-bionic
              args:
                - "/bin/sh"
                - "-c"
                # both steps chained in one string: sh -c only executes its first argument
                - "/usr/bin/mongodump -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD -o /tmp/backup -h mongodb && tar cvzf mongodb-backup.tar.gz /tmp/backup"
                #  && gsutil cp mongodb-backup.tar.gz gs://my-project/backups/mongodb-backup.tar.gz
              envFrom:
                - secretRef:
                    name: mongodb-secret
              volumeMounts:
                - name: mongodb-persistent-storage
                  mountPath: /data/db
          restartPolicy: OnFailure
          volumes:
            - name: mongodb-persistent-storage
              persistentVolumeClaim:
                claimName: mongodb-pv-claim
Ref : https://www.cloudytuts.com/tutorials/kubernetes/how-to-backup-and-restore-mongodb-deployment-on-kubernetes/
Also, how do I know where MongoDB is installed on the pod?
Is there any way without changing to Replicaset?
If you know how a replica set works, then you know the answer to the above questions.
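For completeness: oplog-based (incremental) backup needs the local.oplog.rs collection, which only exists when mongod runs as a replica set member; that is why the error above appears on a standalone. A standalone can be converted to a single-node replica set, roughly as sketched here (the set name rs0 and the dbpath are illustrative, and the restart must be done against your actual deployment):

```shell
# Restart mongod with a replica set name, then initiate a one-member set.
mongod --replSet rs0 --dbpath /data/db --fork --logpath /var/log/mongod.log
mongosh --eval 'rs.initiate()'
```

After this, local.oplog.rs exists and the mongodump oplog command from the question should find it.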

How to increase the maximum connection limit of a Postgres docker container? [duplicate]

Problem
I have too many connections open using the default docker postgresql configuration
https://hub.docker.com/_/postgres/
Goal
I want to extend max_connection without using a volume for mounting the configuration (I need this to be available by default for my CI environment).
I have tried to use sed to edit the configuration but this has no effect.
What is the recommended way of overriding default configuration of postgresql docker official image?
Run this docker-compose.yml:
version: '2'
services:
  postgres:
    image: postgres:10.3-alpine
    command: postgres -c 'max_connections=200'
    environment:
      POSTGRES_DB: pgdb
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    stdin_open: true
    tty: true
    ports:
      - 5432:5432/tcp
It is as simple as (you just override the default CMD with postgres -N 500):
docker run -d --name db postgres:10 postgres -N 500
You can check it using:
docker run -it --rm --link db:db postgres psql -h db -U postgres
show max_connections;
max_connections
-----------------
500
(1 row)
The official image provides a way to run arbitrary SQL and shell scripts after the DB is initialized by putting them into the /docker-entrypoint-initdb.d/ directory. This script:
ALTER SYSTEM SET max_connections = 500;
will let us change the maximum connection limit. Note that the postgres server will be restarted after the initializing scripts are run, so even settings like max_connections that require a restart will go into effect when your container starts for the first time.
How you attach this script to the docker container depends on how you are starting it:
Docker
Save the SQL script to a file max_conns.sql, then use it as a volume:
docker run -it -v $PWD/max_conns.sql:/docker-entrypoint-initdb.d/max_conns.sql postgres
Docker Compose
With docker compose, save the SQL script to a file max_conns.sql next to your docker-compose.yaml, and then reference it:
version: '3'
services:
  db:
    image: postgres:latest
    volumes:
      - ./max_conns.sql:/docker-entrypoint-initdb.d/max_conns.sql
Kubernetes
With kubernetes, you will need to create a configmap for the script:
kind: ConfigMap
apiVersion: v1
metadata:
  name: max-conns
data:
  max_conns.sql: "ALTER SYSTEM SET max_connections = 500;"
And then use it with a container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-example
spec:
  selector:
    matchLabels:
      app: postgres-example
  template:
    metadata:
      labels:
        app: postgres-example
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          volumeMounts:
            - name: max-conns
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: max-conns
          configMap:
            name: max-conns
If you are using TestContainers do this before starting the container
postgreSQLContainer.setCommand("postgres", "-c", "max_connections=20000");
postgreSQLContainer.start();
From Google: the maximum allowed value is 262143, the minimum is 1, and the default is 100.
I spent a lot of time on this issue, and the simplest way to resolve it is to add max_connections to your values.yaml file straight away. You can specify extended configuration parameters; this option will override the conf file.
For instance;
extendedConfiguration: "max_connections = 500"
You can develop an init script that accepts the max connections value from an environment variable and applies it during startup, then launches the PostgreSQL service.
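A minimal sketch of such an init script, relying on the same /docker-entrypoint-initdb.d/ mechanism described earlier in this thread; MAX_CONNECTIONS is an assumed custom variable you would pass to the container yourself, not a standard variable of the official image:

```shell
#!/bin/sh
# /docker-entrypoint-initdb.d/set-max-conns.sh
# MAX_CONNECTIONS is a custom env var (assumption); falls back to 100.
# The official entrypoint restarts postgres after init scripts run,
# so this restart-requiring setting takes effect on first start.
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" \
  -c "ALTER SYSTEM SET max_connections = ${MAX_CONNECTIONS:-100};"
```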

Running the Postgres CLI client from a Kubernetes jumpbox

I have setup a Postgres pod on my Kubernetes cluster, and I am trying to troubleshoot it a bit.
I would like to use the official Postgres image and deploy it to my Kubernetes cluster using kubectl. Given that my Postgres server connection details are:
host: mypostgres
port: 5432
username: postgres
password: 12345
And given that I think the command will be something like:
kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh
What do I need to do so that I can deploy this image to my cluster, connect to my Postgres server and start running SQL command against it (for troubleshooting purposes)?
If you're primarily interested in troubleshooting, then you're probably looking for the kubectl port-forward command, which will expose a container port on your local host. First, you'll need to deploy the Postgres pod; you haven't shown what your pod manifest looks like, so I'm going to assume a Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - env:
            - name: POSTGRES_PASSWORD
              value: secret
            - name: POSTGRES_USER
              value: example
            - name: POSTGRES_DB
              value: example
          image: docker.io/postgres:13
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres-data
      volumes:
        - emptyDir: {}
          name: postgres-data
Once this is running, you can access postgres with the port-forward
command like this:
kubectl -n sandbox port-forward deploy/postgres 5432:5432
This should result in:
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
And now we can connect to Postgres using psql and run queries
against it:
$ psql -h localhost -U example example
psql (13.4)
Type "help" for help.
example=#
kubectl port-forward is only useful as a troubleshooting mechanism. If
you were trying to access your postgres pod from another pod, you
would create a Service and then use the service name as the hostname
for your client connections.
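Such a Service, matching the Deployment above, might look like this (a plain ClusterIP sketch; adjust the name and namespace to your setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: sandbox
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```

Other pods in the cluster could then connect with host "postgres" (or "postgres.sandbox" across namespaces) on port 5432.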
Update
If your goal is to deploy a client container so that you can log
into it and run psql, the easiest solution is just to exec
into the postgres container itself. Assuming you were using the
Deployment shown earlier in this question, you could run:
kubectl exec -it deploy/postgres -- bash
This would get you a shell prompt inside the postgres container. You
can run psql and not have to worry about authentication:
$ kubectl exec -it deploy/postgres -- bash
$ psql -U example example
psql (13.4 (Debian 13.4-1.pgdg100+1))
Type "help" for help.
example=#
If you want to start up a separate container, you can use the kubectl debug command:
kubectl debug deploy/postgres
This gets you a root prompt in a debug pod. If you know the ip address
of the postgres pod, you can connect to it using psql. To get
the address of the pod, run this on your local host:
$ kubectl get pod/postgres-6df4c549f-p2892 -o jsonpath='{.status.podIP}'
10.130.0.11
And then inside the debug container:
root@postgres-debug:/# psql -h 10.130.0.11 -U example example
In this case you would have to provide an appropriate password,
because you are accessing postgres from "another machine", rather than
running directly inside the postgres pod.
Note that in the above answer I've used the shortcut
deploy/<deployment-name>, which avoids having to know the name of the
pod created by the Deployment. You can replace that with
pod/<podname> in all cases.

How to use PersistentVolume for PostgreSQL data in Kubernetes

We are developing Web-server by Flask & DB-server by PostgreSQL in Kubernetes, and considering to use PersistentVolume in order to make data persistent.
However, for the directory specified as Volume, the ownership is forced to become ‘root’ user.
In PostgreSQL, if the user and owner do not match, the server can not be set up.
And, we can not set up a server under the user=‘root’.
So, we can not make PostgreSQL server data persistent.
Dockerfile
FROM ubuntu:latest
ARG project_dir=/app/
WORKDIR $project_dir
RUN apt update
RUN apt install --yes python3 python3-pip postgresql-9.5
RUN apt clean
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN pip install flask
RUN pip install flask_sqlalchemy
RUN pip install psycopg2
ADD app.py $project_dir
ADD templates/ $project_dir/templates/
USER postgres
RUN /etc/init.d/postgresql start && \
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" && \
createdb -O docker docker
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.5/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.5/main/postgresql.conf
EXPOSE 5000
CMD /usr/lib/postgresql/9.5/bin/postgres -D /var/lib/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf & python /app/app.py
development.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: dummyproject
  labels:
    app: dummyproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummyproject
  template:
    metadata:
      labels:
        app: dummyproject
    spec:
      containers:
        - name: dummyproject
          image: dummyproject:0.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
          volumeMounts:
            - mountPath: /var/lib/postgresql/
              name: mydata
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: nfs-claim1
Please let me know if you know the solution.
Feel free to run PostgreSQL as root. Root in the container is not the same as root on a bare Linux machine. UID==0 doesn't imply superpowers anymore. Nowadays user access is controlled with the mechanism of capabilities, and your container won't have any dangerous capabilities by default (unless you explicitly ask Kubernetes for some).
You have 2 options here:
Set UID to 0 in the container, as @Alexandr Lurye said above. That is more or less secure now.
You can use an InitContainer to change the owner. Here is my answer on how to do it - https://serverfault.com/questions/906083/how-to-mount-volume-with-specific-uid-in-kubernetes-pod/907160#907160
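The InitContainer approach, adapted to the Deployment in the question, might be sketched like this fragment to merge into the pod spec (uid/gid 999 is the postgres user in the official postgres image; treat that as an assumption for your own image):

```yaml
spec:
  template:
    spec:
      initContainers:
        - name: fix-perms
          image: busybox
          # chown the mounted data dir to the postgres uid/gid before the
          # main container starts
          command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql"]
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: mydata
```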

How to backup a Postgres database in Kubernetes on Google Cloud?

What is the best practice for backing up a Postgres database running on Google Cloud Container Engine?
My thought is working towards storing the backups in Google Cloud Storage, but I am unsure of how to connect the Disk/Pod to a Storage Bucket.
I am running Postgres in a Kubernetes cluster using the following configuration:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.6.2-alpine
          imagePullPolicy: IfNotPresent
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data
            - name: POSTGRES_DB
              value: my-database-name
            - name: POSTGRES_PASSWORD
              value: my-password
            - name: POSTGRES_USER
              value: my-database-user
          name: postgres-container
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: my-postgres-volume
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: my-postgres-disk
          name: my-postgres-volume
I have attempted to create a Job to run a backup:
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-dump-job
spec:
  template:
    metadata:
      labels:
        app: postgres-dump
    spec:
      containers:
        - command:
            - pg_dump
            - my-database-name
          # `env` value matches `env` from previous configuration.
          image: postgres:9.6.2-alpine
          imagePullPolicy: IfNotPresent
          name: my-postgres-dump-container
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: my-postgres-volume
              readOnly: true
      restartPolicy: Never
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: my-postgres-disk
          name: my-postgres-volume
(As far as I understand) this should run the pg_dump command and output the backup data to stdout (which should appear in the kubectl logs).
As an aside, when I inspect the Pods (with kubectl get pods), it shows the Pod never gets out of the "Pending" state, which I gather is due to there not being enough resources to start the Job.
Is it correct to run this process as a Job?
How do I connect the Job to Google Cloud Storage?
Or should I be doing something completely different?
I'm guessing it would be unwise to run pg_dump in the database Container (with kubectl exec) due to a performance hit, but maybe this is ok in a dev/staging server?
As @Marco Lamina said, you can run pg_dump on the postgres pod like this:
DUMP
// pod-name name of the postgres pod
// postgres-user database user that is able to access the database
// database-name name of the database
kubectl exec [pod-name] -- bash -c "pg_dump -U [postgres-user] [database-name]" > database.sql
RESTORE
// pod-name name of the postgres pod
// postgres-user database user that is able to access the database
// database-name name of the database
cat database.sql | kubectl exec -i [pod-name] -- psql -U [postgres-user] -d [database-name]
You can have a Job pod that runs this command and exports the dump to a file storage system such as AWS S3.
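For recurring use, the DUMP command above can be wrapped to compress and timestamp the output in one pipe (a sketch; the bracketed names are the same placeholders as above):

```shell
# Dump, compress, and timestamp in one pipe; no file is left in the pod.
kubectl exec [pod-name] -- \
  pg_dump -U [postgres-user] [database-name] \
| gzip > "backup-$(date +%Y%m%d-%H%M%S).sql.gz"
```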
I think running pg_dump as a job is a good idea, but connecting directly to your DB's persistent disk is not. Try having pg_dump connect to your DB over the network! You could then have a second disk onto which your pg_dump command dumps the backups. To be on the safe side, you can create regular snapshots of this second disk.
The reason the Job's pod stays in the Pending state is that it forever tries to attach/mount the GCE persistent disk and fails, because the disk is already attached/mounted to another pod.
Attaching a persistent disk to multiple pods is only supported if all of them attach/mount the volume in ReadOnly mode. That is of course not a viable solution for you.
I never worked with GCE, but it should be possible to easily create a snapshot from the PD from within GCE. This would not give a very clean backup, more like something in the state of "crashed in the middle, but recoverable", but this is probably acceptable for you.
Running pg_dump inside the database pod is a viable solution, with a few drawbacks as you already noticed, especially performance. You'd also have to move the resulting backup out of the pod afterwards, e.g. by using kubectl cp, and run another exec to clean up the backup in the pod.
You can use Minio Client
First of all, use a simple Dockerfile to make a docker image containing postgres along with the minio client (let's name this image postgres_backup):
FROM postgres
RUN apt-get update && apt-get install -y wget
RUN wget https://dl.min.io/client/mc/release/linux-amd64/mc
RUN chmod +x mc
RUN ./mc alias set gcs https://storage.googleapis.com BKIKJAA5BMMU2RHO6IBB V8f1CwQqAcwo80UEIJEjc5gVQUSSx5ohQ9GSrr12
Now you can use the postgres_backup image in your CronJob (I assume you have made a backups bucket in your Google storage):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-job
spec:
  # Backup the database every day at 2AM
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: postgres-backup
              image: postgres_backup
              env:
                - name: POSTGRES_HOST_AUTH_METHOD
                  value: trust
              command: ["/bin/sh"]
              args: ["-c", 'PGPASSWORD=[Your Postgres Password] pg_dump -Fc -U [Your Postgres Username] -h [Your Postgres Host] [Your Postgres Database] | ./mc pipe gcs/backups/$(date -Iseconds).dump']
          restartPolicy: Never
A lot of tutorials use kubectl cp or transfer the file inside the pod, but you can also pipe the pg_dump container output directly to another process.
kubectl run --env=PGPASSWORD=$PASSWORD --image=bitnami/postgresql postgresql -it --rm -- \
bash -c "pg_dump -U $USER -h $HOST -d $DATABASE" |\
gzip > backup.sql.gz
The easiest way to dump without storing any additional copies on your pod:
kubectl -n [namespace] exec -it [pod name] -- bash -c "export PGPASSWORD='[db password]'; pg_dump -U [db user] [db name]" > [database].sql