How to persist data using a postgres database, Docker, and Kubernetes? - postgresql

I am trying to mount a persistent disk to my container, which runs a custom Postgres image. I am using Kubernetes and following this tutorial.
This is my db_pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: lp-db
  labels:
    name: lp-db
spec:
  containers:
    - image: my_username/my-db
      name: my-db
      ports:
        - containerPort: 5432
          name: my-db
      volumeMounts:
        - name: pg-data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: pg-data
      gcePersistentDisk:
        pdName: my-db-disk
        fsType: ext4
I create the disk using the command gcloud compute disks create --size 200GB my-db-disk.
However, when I run the pod, delete it, and then run it again (as in the tutorial), my data is not persisted.
I tried multiple versions of this file, including versions with PersistentVolumes and PersistentVolumeClaims, and I tried changing the mountPath, but with no success.
Edit
Dockerfile for creating the Postgres image:
FROM ubuntu:trusty
RUN rm /bin/sh && \
ln -s /bin/bash /bin/sh
# Get Postgres
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" >> /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && \
apt-get install -y wget
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Install virtualenv (will be needed later)
RUN apt-get update && \
apt-get install -y \
libjpeg-dev \
libpq-dev \
postgresql-9.4 \
python-dev \
python-pip \
python-virtualenv \
strace \
supervisor
# Grab gosu for easy step-down from root
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& apt-get purge -y --auto-remove ca-certificates wget
# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
# Adjust PostgreSQL configuration so that remote connections to the database are possible.
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.4/main/pg_hba.conf
# And add ``listen_addresses`` to ``/etc/postgresql/9.4/main/postgresql.conf``
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.4/main/postgresql.conf
RUN echo "log_directory='/var/log/postgresql'" >> /etc/postgresql/9.4/main/postgresql.conf
# Add all code from the project and all config files
WORKDIR /home/projects/my-project
COPY . .
# Add VOLUMEs to allow backup of config, logs and databases
ENV PGDATA /var/lib/postgresql/data
VOLUME /var/lib/postgresql/data
# Expose an entrypoint and a port
RUN chmod +x scripts/sh/*
EXPOSE 5432
ENTRYPOINT ["scripts/sh/entrypoint-postgres.sh"]
And the entrypoint script:
echo " I am " && gosu postgres whoami
gosu postgres /etc/init.d/postgresql start && echo 'Started postgres'
gosu postgres psql --command "CREATE USER myuser WITH SUPERUSER PASSWORD 'mypassword';" && echo 'Created user'
gosu postgres createdb -O myuser mydb && echo 'Created db'
# This just keeps the container alive.
tail -F /var/log/postgresql/postgresql-9.4-main.log

In the end, it seems that the real problem was the fact that I was trying to create the database from my entrypoint script.
Things such as creating a db or a user should be done at container creation time, so I ended up using the standard Postgres image, which actually provides a simple and easy way to create a user and a db.
This is the fully functional configuration file for Postgres.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  containers:
    - name: postgres
      image: postgres
      env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        - name: POSTGRES_USER
          value: myuser
        - name: POSTGRES_PASSWORD
          value: mypassword
        - name: POSTGRES_DB
          value: mydb
      ports:
        - containerPort: 5432
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: pg-data
  volumes:
    - name: pg-data
      persistentVolumeClaim:
        claimName: pg-data-claim
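For completeness, this pod expects a PersistentVolumeClaim named pg-data-claim to already exist. A minimal sketch of such a claim (the storage size and access mode below are assumptions; adjust them to your disk and cluster):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi   # assumption: match the size of the underlying disk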
Thanks to all those who helped me :)

Does your custom PostgreSQL image persist data at /var/lib/postgresql/data?
Are you able to get logs from your PostgreSQL container and spot anything interesting?
When your pod is running, can you see the mount points inside your container and check that the persistent disk is there?

I followed this scenario and was able to persist my data by changing the mountPath to /var/lib/postgresql; I also reproduced this with Cassandra (i.e. /var/lib/cassandra as the mountPath).
I was able to delete/restart pods from different nodes/hosts and still see my "users" table and the data I previously entered. However, I was not using a custom image, I just used the standard Docker images.
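As a sketch, assuming the same pod and disk names as in the question, the only change is the mountPath:
spec:
  containers:
    - image: my_username/my-db
      name: my-db
      volumeMounts:
        - name: pg-data
          mountPath: /var/lib/postgresql   # one level above .../data
  volumes:
    - name: pg-data
      gcePersistentDisk:
        pdName: my-db-disk
        fsType: ext4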

Related

Connection refused inside kubernetes cron jobs using snx vpn and paramiko sftp

I run a Python script that downloads a file via SFTP, using an SNX VPN and Paramiko for SFTP. I invoke the script via a CronJob.
Here is my CronJob manifest:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: file-uploader-a
  labels:
    app: file-uploader
spec:
  schedule: "*/1 0-10 * * *"
  jobTemplate:
    spec:
      parallelism: 1  # How many pods will be instantiated at once.
      completions: 1  # How many containers of the job are instantiated one after the other (sequentially) inside the pod.
      backoffLimit: 5  # Maximum pod restarts in case of failure
      template:
        spec:
          containers:
            - name: file-uploader-a
              image: image-a
              imagePullPolicy: IfNotPresent
              envFrom:
                - configMapRef:
                    name: file-env
                - secretRef:
                    name: file-secret
              securityContext:
                capabilities:
                  add:
                    - CAP_NET_ADMIN
                    - CAP_SYS_MODULE
              command:
                - sh
                - "-c"
                - ". /root/.venv/bin/activate && python -m python.module.a"
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 8
My Dockerfile:
FROM ubuntu:18.04
ADD scripts/snx_install_800010013.sh /root
ADD scripts/SINAR33-exp-13May2022.pfx /root
ADD scripts/post_install.sh /root
ADD scripts/init_snx.sh /root
ADD requirements.txt /root
RUN cd root && mkdir bss_uploader
RUN cd root/bss_uploader && mkdir temp
ADD bss_uploader /root/bss_uploader
ARG SNX_SERVER
ARG FTP_HOST
ARG DEBIAN_FRONTEND=noninteractive
RUN dpkg --add-architecture i386 && apt-get update && \
apt-get install bzip2 kmod libstdc++5:i386 \
libpam0g:i386 libx11-6:i386 expect iptables \
net-tools iputils-ping iproute2 python3-venv \
linux-modules-5.4.0-1063-aws python3-pip \
software-properties-common tmux openssh-client -y
RUN cd /usr/bin && ln -s python3 python
WORKDIR /root
RUN bash -x snx_install_800010013.sh
RUN bash -x post_install.sh $SNX_SERVER $FTP_HOST
The post_install.sh script:
#!/bin/bash
SNX_SERVER=$1
FTP_HOST=$2
mkdir ~/.ssh && touch ~/.ssh/config
echo -e "Host $FTP_HOST\n\tStrictHostKeyChecking no\n\nHost $SNX_SERVER\n\tStrictHostKeyChecking no" >> ~/.ssh/config
chmod 644 ~/.ssh/config
uname=$(uname -r)
mkdir /lib/modules/$uname
# move kernel modules installed to current
cp -a /lib/modules/5.4.0-1063-aws/. /lib/modules/$uname/
modprobe tun
python -m venv .venv
. .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt
The init_snx.sh script, used to initialize on the first run:
#!/bin/bash
iptables -t nat -A POSTROUTING -o tunsnx -j MASQUERADE
iptables -A FORWARD -i eth0 -j ACCEPT
SNX_SERVER=$1
SNX_PASSWORD=$2
SNX_COMMAND="snx -s $SNX_SERVER -c /root/SINAR33-exp-13May2022.pfx -g"
/usr/bin/expect <<EOF
spawn $SNX_COMMAND
expect "*?assword:"
send "$SNX_PASSWORD\r"
expect "*Do you accept*"
send "y\r"
expect "SNX - connected."
spawn sleep 4
expect "Waiting up to*"
spawn snx -d
expect "SNX - Disconnecting*"
spawn sleep 2
expect "Waiting up to*"
EOF
When I try to run the script via the CronJob, I get a connection refused error while connecting to SFTP.
But when I try to run it manually from a Docker container (via the Docker CLI), it succeeds:
docker run --name xt_up --cap-add=ALL -t -d image:latest
I have already tried adding networkPolicies.egress but still have no luck.
Could you please help me with this?
Thank you, and sorry for my bad English.

User permission is denied with chown using Dockerfile on docker-compose

I'm trying to mount a volume in docker-compose, but it seems my user does not have permission to use the volume. :/
My Dockerfile is:
FROM openjdk:8u181-jdk-slim
ENV HOME /app
ENV CONFIG_PATH $HOME/config
ENV DATA_PATH $HOME/data
ENV LOG_PATH $HOME/log
RUN addgroup --gid 1001 myuser \
&& adduser --uid 1001 --gid 1001 --home $HOME --shell /bin/bash \
--gecos "" --no-create-home --disabled-password myuser \
&& mkdir -p $CONFIG_PATH $DATA_PATH $LOG_PATH \
&& chown -R myuser:myuser $HOME \
&& chmod -R g=u $HOME \
&& chmod +x $HOME/*
RUN apt-get update \
&& apt-get install -y curl \
&& apt-get clean
VOLUME $CONFIG_PATH $DATA_PATH $LOG_PATH
USER myuser:myuser
EXPOSE 7777
EXPOSE 8080
HEALTHCHECK --interval=1m --timeout=10s --start-period=2m \
CMD curl -f http://localhost:7777/health || exit 1
COPY --chown=myuser my-service-*.jar $HOME/my-service.jar
ENTRYPOINT ["/bin/bash", "-c", "java $JAVA_OPTS -jar $HOME/my-service.jar $0 $#"]
My docker-compose file is:
volumes:
  my-service_stream:

my-service:
  image: my-service-image
  networks:
    - internal
  env_file:
    - config/common.env
  volumes:
    - my-service_stream:/app/data/state
I am not able to use myuser and not able to mount the volume for this user :/ myuser does not have permission to write to that volume.
I have tried adding the user to my docker-compose file as
user: "1001:1001"
but nothing changed.

how to mount a path as non root user in kubernetes

I deployed a MySQL monitor application image in a Kubernetes cluster, which runs as a non-root user. When I try to mount a path to make the data persistent, it overrides the directory in which my application configuration files have to be present (it creates a new, empty directory, hiding everything inside that path). Even though I tried using an init container, I am still not able to mount it.
My Dockerfile:
FROM centos:7
ENV DIR /binaries
ENV PASS admin
WORKDIR ${DIR}
COPY libstdc++-4.8.5-39.el7.x86_64.rpm ${DIR}
COPY numactl-libs-2.0.12-3.el7.x86_64.rpm ${DIR}
COPY mysqlmonitor-8.0.18.1217-linux-x86_64-installer.bin ${DIR}
RUN yum install -y libaio && yum -y install gcc && yum -y install gcc-c++ && yum -y install compat-libstdc++-33 && yum -y install libstdc++-devel && yum -y install elfutils-libelf-devel && yum -y install glibc-devel && yum -y install libaio-devel && yum -y install sysstat
RUN yum install -y gcc && yum install -y make && yum install -y apr-devel && yum install -y openssl-devel && yum install -y java
RUN rpm -ivh numactl-libs-2.0.12-3.el7.x86_64.rpm
RUN useradd sql
RUN chown sql ${DIR}
RUN chmod 777 ${DIR}
RUN chmod 755 /home/sql
USER sql
WORKDIR ${DIR}
RUN ./mysqlmonitor-8.0.18.1217-linux-x86_64-installer.bin --installdir /home/sql/mysql/enterprise/monitor --mode unattended --tomcatport 18080 --tomcatsslport 18443 --adminpassword ### --dbport 13306
RUN rm -rf /binaries/*
VOLUME /home/mysql/mysql/enterprise/monitor/mysql/data
ENTRYPOINT ["/bin/bash", "-c", "/home/sql/mysql/enterprise/monitor/mysqlmonitorctl.sh start && tail -f /home/sql/mysql/enterprise/monitor/apache-tomcat/logs/mysql-monitor.log"]
My deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mem
  template:
    metadata:
      labels:
        app: mem
    spec:
      containers:
        - name: mem
          image: 22071997/mem
          command:
          volumeMounts:
            - mountPath: /home/sql/mysql/enterprise/monitor/mysql/data
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: mem-pvc1
      initContainers:
        - name: permissionsfix
          image: alpine:latest
          command: ["/bin/sh", "-c"]
          args:
            - chown -R 1000:1000 /home/sql/mysql/enterprise/monitor/ && chmod -R 777 /home/sql/mysql/enterprise/monitor/ ;
          volumeMounts:
            - name: volume
              mountPath: /home/sql/mysql/enterprise/monitor
Output:
[sql@mypod-775764db45-bzs8n enterprise]$ cd monitor/mysql
[sql@mypod-775764db45-bzs8n mysql]$ ls
LICENSE LICENSE.router README.meb bin docs lib my-large.cnf my-small.cnf new runtime support-files var
LICENSE.meb README README.router data include man my-medium.cnf my.cnf run share tmp
[sql@mypod-775764db45-bzs8n mysql]$ cd data
[sql@mypod-775764db45-bzs8n data]$ ls
mypod-775764db45-bzs8n.err
This doesn't seem related to mounting as a non-root user; rather, mounting a volume over an existing directory results in that directory looking as if it is empty (or containing whatever happens to be on the volume already). If you have configuration stored on a non-volume path that you would like to be on the volume, then you will need to mount the volume to a different location (so it doesn't hide your local configuration) and copy that configuration to the mounted volume location. You can do this in an init container, but be careful not to overwrite the volume contents on every startup of the container.
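A rough sketch of that pattern, reusing the claim and paths from the question: run the application image as an init container, mount the claim at a staging path, and seed it only when it is still empty; the main container then mounts the same claim at the real path. The /staging path and the .seeded marker file are hypothetical names of my own.
# Pod-spec fragment (sketch): seed the volume once, without clobbering it on later restarts.
initContainers:
  - name: seed-data
    image: 22071997/mem              # the application image, so the installed files are present
    command: ["/bin/sh", "-c"]
    args:
      - |
        # .seeded is a hypothetical marker file; copy only on first use
        if [ ! -f /staging/.seeded ]; then
          cp -a /home/sql/mysql/enterprise/monitor/mysql/data/. /staging/
          touch /staging/.seeded
        fi
    volumeMounts:
      - name: volume
        mountPath: /staging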

Rename volume in docker compose, with docker derived from postgres

I have this docker-compose file:
version: '3.7'
services:
  app-db:
    build:
      context: ./
      dockerfile: Dockerfile-pg
    image: app-pg:1.0.0
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app
    volumes:
      - ./docker-entrypoint-initdb.d/init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh
      - v-app-pgdata:/var/lib/postgresql
      - v-app-pglog:/data/var/log/postgresql
      - v-app-pgconf:/etc/postgresql
  app-main:
    build:
      context: ./
      dockerfile: Dockerfile-tar-cp
    image: app-main:1.0.0
    restart: always
    ports:
      - 80:80
volumes:
  v-app-pgdata:
    name: v-app-pgdata
  v-app-pglog:
    name: v-app-pglog
  v-app-pgconf:
    name: v-app-pgconf
So, an app container and a Postgres-derived container:
#docker build -t app-pg:1.0.0 -f Dockerfile-pg .
#docker run -d --name appC-pg -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=postgres app-pg:1.0.0
FROM postgres:12.1
MAINTAINER xxx
#ARG A_DB_USER='postgres'
#ARG A_DB_PASS='postgres'
ARG A_DB_NAME='app'
ARG A_TZ='Europe/Zurich'
#ENV DB_USER=${A_DB_USER}
#ENV DB_PASS=${A_DB_PASS}
ENV DB_NAME=${A_DB_NAME}
ENV TZ=${A_TZ}
# Adjusting Timezone in the system
RUN echo $TZ > /etc/timezone && \
apt-get update && apt-get install -y tzdata && \
rm /etc/localtime && \
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata && \
apt-get clean
# install postgis
RUN apt-get update && \
apt-get install -y postgis && \
apt-get clean
USER postgres
#Add password "postgres" to user postgres, create db, add .sql
#RUN /etc/init.d/postgresql start && \
# psql --command "ALTER USER ${DB_USER} WITH PASSWORD '${DB_PASS}'; SET TIME ZONE '${TZ}';" && \
# createdb -O ${DB_USER} ${DB_NAME} -E UTF8 && \
# psql -d ${DB_NAME} -c 'CREATE EXTENSION postgis'
EXPOSE 5432
My problem is that the default postgres Dockerfile has this line:
VOLUME /var/lib/postgresql/data
So even if I create a named volume for the same folder, my docker-compose creates 4 and not 3 volumes, one of them unnamed, due to that line.
How can I solve this issue?
I had the same problem as you, and I solved it by editing the docker-compose.yml in the following way:
Create a volume for the data:
volumes:
  v-app-pgdata:
In your declaration of the database service, in the volumes clause, you need to change - v-app-pgdata:/var/lib/postgresql to:
- v-app-pgdata:/var/lib/postgresql/data
The problem was that you were not mounting the volume to the correct place, and therefore another volume had to be created for the mount point declared in the base image: VOLUME /var/lib/postgresql/data. If you run this container without attaching a volume to that mount point, it will be created automatically, with a random name.
However, it has been pointed out to me that if the OP strictly needs a volume at /var/lib/postgresql, then this solution won't work.
Hope this helps, as it worked for me.
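For reference, a sketch of the resulting volumes section of the app-db service, with the other two mounts kept unchanged from the question:
    volumes:
      - ./docker-entrypoint-initdb.d/init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh
      - v-app-pgdata:/var/lib/postgresql/data
      - v-app-pglog:/data/var/log/postgresql
      - v-app-pgconf:/etc/postgresql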

PostgreSQL PGDATA from host in Docker-System

I want to run a webapp with docker-compose. I have a CentOS 7 host running PostgreSQL and the Docker engine. My docker-compose setup includes a PostgreSQL image and should run with the PGDATA from the host system. But every time I run docker-compose I get the error:
initdb: directory "/var/lib/docker-postgresql" exists but is not empty
The docker-compose part for the PostgreSQL database looks like:
db:
  build: ./postgres/
  container_name: ps01
  volumes:
    - ./postgres:/tmp
    - /var/lib/pgsql/9.4:/var/lib/docker-postgresql
  expose:
    - "5432"
  environment:
    PGDATA: /var/lib/docker-postgresql
I mount the host's /var/lib/pgsql/9.4 into the Docker PostgreSQL container at /var/lib/docker-postgresql and set this path in the PGDATA environment variable.
The Dockerfile in ./postgres/ looks like:
FROM postgres:latest
ENV POSTGIS_MAJOR 2.3
ENV POSTGIS_VERSION 2.3.1+dfsg-1.pgdg80+1
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION \
postgis=$POSTGIS_VERSION \
&& rm -rf /var/lib/apt/lists/*
What should I do to share my Postgres data from the host?