CentOS with Systemd on Docker - postgresql

I am working on automated testing of my playbooks with GitLab CI. Ubuntu works very well and gives no issues.
The problem I currently have is with CentOS and systemd. First of all, the playbook (installing PostgreSQL 9.5 inside CentOS 7):
- name: Ensure PostgreSQL is running
  service:
    name: postgresql-9.5
    state: restarted
  ignore_errors: true
  when:
    - ansible_os_family == 'RedHat'
This is what I get when I try to start Postgres inside the container:
Failed to get D-Bus connection: Operation not permitted (repeated for every attempt)
I already run the container in privileged mode, with cgroups mounted, and so on. I have also tried different Docker images, but nothing works.

When using Docker, I think it is better to just use postgres itself to start the server, with a command like:
postgres -D /opt/postgresql/data/ > /var/log/postgresql/pg_server.log 2>&1 &

When you use Docker, you don't have a fully functional systemd.
You can use the solution suggested by @KJocker to make a functional PostgreSQL container. Or you can instead configure systemd to work inside the container; here is a document to check.

I had the same problem when using Ansible with a Docker container, and I wrote a docker-systemctl-replacement for it. It works for PostgreSQL, and there is no need to change the Ansible script; it can stay as it is for deployment on a real machine.
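As a rough sketch, the replacement is overlaid on top of the image so that systemctl calls (and therefore Ansible's service module) keep working without a real systemd; the base image and paths below are illustrative, check the project's README for the exact file names:
FROM centos:7
# systemctl.py is the script from the docker-systemctl-replacement project,
# copied over the real systemctl so that 'systemctl start postgresql-9.5' works without systemd
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl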

Edit the configuration of your GitLab Runner instance, /etc/gitlab-runner/config.toml.
From:
[runners.docker]
privileged = false
volumes = ["/cache"]
To:
[runners.docker]
privileged = true
volumes = ["/sys/fs/cgroup:/sys/fs/cgroup:ro", "/cache"]
Add:
[runners.docker.tmpfs]
"/run" = "rw"
"/tmp" = "rw"
[runners.docker.services_tmpfs]
"/run" = "rw"
"/tmp" = "rw"
Restart gitlab-runner.
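For example, depending on how the runner was installed (both forms are common; adjust to your setup):
sudo gitlab-runner restart
# or, if it runs as a systemd service on the host:
sudo systemctl restart gitlab-runner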
On your Docker image, edit the getty@tty1 service to permit autologin of the root user after systemd boots:
sed -e 's|/sbin/agetty |/sbin/agetty -a root |g' -i /etc/systemd/system/getty.target.wants/getty@tty1.service
Use that Docker image in the image section of .gitlab-ci.yml and add the following to start systemd. Do not edit the entrypoint.
script:
  - /lib/systemd/systemd --system --log-target=kmsg &
  - sleep 5
  - systemctl start postgresql-9.5
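Put together, a minimal .gitlab-ci.yml job might look like this (the image name and the final playbook command are placeholders, not part of the original answer):
test:
  image: registry.example.com/centos7-systemd:latest   # placeholder: image built as described above
  script:
    - /lib/systemd/systemd --system --log-target=kmsg &
    - sleep 5
    - systemctl start postgresql-9.5
    - ansible-playbook -i inventory playbook.yml        # placeholder: your actual test steps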

Related

How to use an init container to check if MySQL is ready for connections?

Friends, I am learning here and trying to implement an init container that checks if MySQL is ready for connections. I have a pod running MySQL and another pod with an app that will connect to the MySQL pod when it is ready.
No success so far. The error I am getting is: sh: mysql: not found. This is how I am trying:
initContainers:
  - name: {{ .Values.initContainers.name }}
    image: {{ .Values.initContainers.image }}
    command:
      - "sh"
      - "-c"
      - "until mysql --host=mysql.default.svc.cluster.local --user={MYSQL_USER} --password={MYSQL_PASSWORD} --execute=\"SELECT 1;\"; do echo waiting for mysql; sleep 2; done;"
Any idea how I could make this work?
Please try using this.
initContainers:
  - name: init-cont
    image: busybox:1.31
    command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
Do absolutely nothing; even delete the initContainers: you have now.
Let's say your application starts up before the database is ready. It tries to connect, fails, and exits. Kubernetes notices this, so it will try to start the application again; if it fails repeatedly, it will start waiting progressively longer between retries (you will see CrashLoopBackOff state). Eventually the cluster will have waited long enough that the database is ready, and the next retry will be successful.
Note that this is probably the same thing that will happen if the database restarts after the application is already running; "at startup time" doesn't need to be a special case.
Your initContainer is missing the mysql client binary.
There are a few ways to solve this:
Use an official mysql docker image, and use what you have as the command.
Create your own docker image from a base image (e.g. Alpine), install mysql-client, and then modify your initContainer to work with Alpine.
Assuming Alpine is your base image, you would have a Dockerfile like this:
FROM alpine:3.14
RUN apk add mysql-client
Then your initContainer command would be:
command:
  - "/bin/ash"
  - "-c"
  - "until mysql --host=mysql.default.svc.cluster.local --user={MYSQL_USER} --password={MYSQL_PASSWORD} --execute=\"SELECT 1;\"; do echo waiting for mysql; sleep 2; done;"
My recommendation: use the official MySQL Docker image.
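A minimal sketch of that first option, reusing the official image as the init container (the tag and the way credentials are injected are assumptions, not from the question):
initContainers:
  - name: wait-for-mysql
    image: mysql:8.0   # official image, so the mysql client binary is already present
    command:
      - "sh"
      - "-c"
      - "until mysql --host=mysql.default.svc.cluster.local --user=$MYSQL_USER --password=$MYSQL_PASSWORD --execute='SELECT 1;'; do echo waiting for mysql; sleep 2; done;"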
Make sure you have the mysql client installed in your init container. It seems you haven't installed it: you are trying to connect as a client from a container that is not running MySQL, so the client binary has to be present there as well. You can use a mysql image, or install the mysql client while building the Docker image for your init container.

selinux not working under containerd with selinux-enable=true

I have two k8s clusters, one using Docker and the other using containerd directly, both with SELinux enabled.
But I found SELinux does not actually work on the containerd one, although both clusters have the same versions of containerd and runc.
Did I miss some setting with containerd?
Docker: the file label is container_file_t and the process runs as container_t; SELinux works fine.
K8s version: 1.17
Docker version: 19.03.6
Containerd version: 1.2.10
SELinux enabled by adding "selinux-enabled": true to /etc/docker/daemon.json
// create pod using tomcat official image then check the process and file label
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:container_t:s0:c655,c743 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_file_t:s0:c655,c743 /usr/local/openjdk-8/bin/java
Containerd: the file label is container_var_lib_t and the process runs as spc_t; SELinux confinement is effectively not applied.
K8s version: 1.15
Containerd version: 1.2.10
SELinux enabled by setting enable_selinux = true in /etc/containerd/config.toml
// create pod using tomcat official image then check the process and file label
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:spc_t:s0 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_var_lib_t:s0 /usr/local/openjdk-8/bin/java
// so running as spc_t appears to be the expected transition
# sesearch -T -t container_var_lib_t | grep spc_t
type_transition container_runtime_t container_var_lib_t : process spc_t;
From this issue we can read:
Containerd includes minimal support for SELinux. More accurately, it
contains support to run ON systems using SELinux, but it does not make
use of SELinux to improve container security.
All containers run with the
system_u:system_r:container_runtime_t:s0 label, but no further
segmentation is made
There is no full support for what you are doing when using containerd. Your approach is correct; the problem is the lack of support for this functionality.

Cron in postgresql:alpine docker container

I am using the "plain" postgresql:alpine Docker image but have to schedule a daily database backup. I think this is a pretty common task.
I created a script named backup, stored it in the container in /etc/periodic/15min, and made it executable:
bash-4.4# ls -l /etc/periodic/15min/
total 4
-rwxr-xr-x 1 root root 95 Mar 2 15:44 backup
I tried executing it manually, and that works fine.
My problem is getting crond to run automatically.
If I run docker exec my-postgresql-container crond, the daemon starts and cron works, but I would like to embed this into my Dockerfile:
FROM postgres:alpine
# my backup script, MUST NOT have .sh extension
COPY backup.sh /etc/periodic/15min/backup
RUN chmod a+x /etc/periodic/15min/backup
RUN crond # <- doesn't work
I have no idea how to rewrite or overwrite the commands of the official image. For update reasons I would also like to stay on these images, if possible.
Note: this option is for when you want to run multiple services in the same container.
Install supervisord, which makes it possible to run both crond and PostgreSQL. The Dockerfile will be as follows:
FROM postgres:alpine
RUN apk add --no-cache supervisor
RUN mkdir /etc/supervisor.d
COPY postgres_cron.ini /etc/supervisor.d/postgres_cron.ini
ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
And postgres_cron.ini will be as follows:
[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
loglevel=info ; (log level;default info; others: debug,warn,trace)
nodaemon=true ; (start in foreground if true;default false)
[program:postgres]
command=/usr/local/bin/docker-entrypoint.sh postgres
autostart=true
autorestart=true
[program:cron]
command =/usr/sbin/crond -f
autostart=true
autorestart=true
Then you can start the Docker build process and run a container from your new image. Feel free to modify the Dockerfile or postgres_cron.ini as needed.
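For example (the image tag and container name are placeholders):
docker build -t postgres-with-cron .
docker run -d --name my-postgresql-container -e POSTGRES_PASSWORD=secret postgres-with-cron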
I had the exact same problem a few months ago. The key aspect is that a container can have only one main process, defined by the ENTRYPOINT and/or CMD in your Dockerfile.
You cannot just swap out postgres for crond, otherwise your database isn't running. It is generally recommended to separate areas of concern by using one service per container.
With that in mind, either use a separate container that runs nothing but crond, so that Docker can track its lifecycle and restart it when it fails, the machine restarts, etc.
Or run the jobs via cron on your host using docker exec.
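For the host-cron option, a sketch of a crontab entry (container name, user, database, and backup path are placeholders):
# Nightly dump at 03:00 through the running container; note that % must be escaped in crontab.
0 3 * * * docker exec my-postgresql-container pg_dump -U postgres mydb > /backups/mydb-$(date +\%F).sql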
The third and, in my opinion, best (but also most advanced) solution is pg_cron. It is a Postgres extension and therefore runs the jobs in the same database container. Your challenge would be to adapt its configuration and installation.
The easy part should be the postgresql.conf:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
Next, you need to add the pg_cron extension to your image by adjusting the Dockerfile, which you can derive from the official Alpine postgres image. Its installation is described here.
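Once the extension is built into the image and loaded, jobs are scheduled from SQL; a hedged example (the container name and the scheduled command are placeholders, not from this answer):
docker exec my-postgresql-container psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS pg_cron;"
docker exec my-postgresql-container psql -U postgres -c "SELECT cron.schedule('0 3 * * *', 'VACUUM');"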

Docker container mongod error when starting via ssh

I have installed MongoDB in a Docker container together with OpenSSH on Ubuntu 14.04. The container runs with SSH, but when I ssh into the container I get the following error when trying to start mongod.
root@430f9502ba2d:~# service mongod start
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
Running start mongod also has no effect.
I also tried looking at Mongo daemon doesn't run by service mongod start, without it helping.
mongod --config /your/path/to/mongod.conf doesn't seem to work either; it just locks up.
The error below is expected, since of course no mongod server is running.
root@430f9502ba2d:/# mongo
MongoDB shell version: 2.6.9
connecting to: test
2015-05-07T20:49:56.213+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-05-07T20:49:56.214+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
The problem here is your approach. Docker does not have an init system like you are used to on traditional systems. What docker does is replace PID 1 with the process you specify in the CMD or ENTRYPOINT Dockerfile commands. For now, ignore ENTRYPOINT, because it replaces what your CMD is run with (normally, it's /bin/sh -c). You need to instruct docker to start your mongod service in your Dockerfile with the CMD command, like:
CMD /usr/bin/mongod
And when you run your container, mongod will be your PID 1. Now, you're probably wondering at this point "But what about my SSH server?" and the answer is: Don't run an SSH server on your docker containers. There are some use cases where running an SSH server is okay, but almost all of the "normal" reasons (debug, C&C, etc) are nullified with the "best practice" for getting a shell on your container:
docker exec -it myContainer /bin/bash
This will drop you into a shell on your running container. The recommendation here for managing configuration and changes in your docker container is to use something like Ansible. However, remember that docker containers are ephemeral, and you shouldn't be restarting services and changing configuration state on them. If you need a config change, change the Dockerfile or config data, and then start a new container. Good luck! Here is a little more information on Dockerizing MongoDB, but keep in mind that the method described there alters the ENTRYPOINT in the Dockerfile, which is a little more involved and requires a better understanding of what's going on in Dockerfiles.
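Putting the CMD advice into a minimal Dockerfile sketch (the base image, package, and mongod flags are assumptions, not part of this answer):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y mongodb-server && rm -rf /var/lib/apt/lists/*
# mongod runs in the foreground as PID 1; no init system or SSH server is needed
CMD ["/usr/bin/mongod", "--dbpath", "/var/lib/mongodb", "--smallfiles"]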
This is really helpful. I was trying to make old Ansible playbooks work with Docker by creating several blank containers and letting Ansible do the rest.
It works with the command:
mongod --dbpath /var/lib/mongodb --smallfiles

How can I wait for a docker container to be up and running?

When running a service inside a container, let's say mongodb, the command
docker run -d myimage
will exit instantly, and return the container id.
In my CI script, I run a client to test the MongoDB connection right after starting the mongo container.
The problem is: the client can't connect because the service is not up yet.
Apart from adding a big sleep 10 in my script, I don't see any option to wait for a container to be up and running.
Docker has a command wait which doesn't work in that case, because the container doesn't exist.
Is it a limitation of docker?
Found this simple solution; I've been looking for something better but no luck...
until [ "$(docker inspect -f '{{.State.Running}}' CONTAINERNAME)" = "true" ]; do
    sleep 0.1;
done;
or if you want to wait until the container reports as healthy (assuming you have a healthcheck):
until [ "$(docker inspect -f '{{.State.Health.Status}}' CONTAINERNAME)" = "healthy" ]; do
    sleep 0.1;
done;
As commented in a similar issue for docker 1.12
HEALTHCHECK support is merged upstream as per docker/docker#23218 - this can be considered to determine when a container is healthy prior to starting the next in the order
This is available since docker 1.12rc3 (2016-07-14)
docker-compose is in the process of supporting a functionality to wait for specific conditions.
It uses libcompose (so I don't have to rebuild the docker interaction) and adds a bunch of config commands for this. Check it out here: https://github.com/dansteen/controlled-compose
You can use it in a Dockerfile like this:
HEALTHCHECK --interval=5m --timeout=3s \
CMD curl -f http://localhost/ || exit 1
Official docs: https://docs.docker.com/engine/reference/builder/#/healthcheck
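As a hedged sketch, recent Docker Compose releases can combine such a healthcheck with depends_on conditions so a dependent service only starts once the database reports healthy (service names, images, and credentials below are placeholders):
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: myapp:latest            # placeholder
    depends_on:
      db:
        condition: service_healthy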
If you don't want to expose the ports, as is the case if you plan to link the container and might be running multiple instances for testing, then I found this was a good way to do it in one line :) This example is based on waiting for ElasticSearch to be ready:
docker inspect --format '{{ .NetworkSettings.IPAddress }}:9200' elasticsearch | xargs wget --retry-connrefused --tries=5 -q --wait=3 --spider
This requires wget to be available, which is standard on Ubuntu. It will retry 5 times, 3 seconds between tries, even if the connection is refused, and also does not download anything.
If the containerized service you started doesn't necessarily respond well to curl or wget requests (which is quite likely for many services) then you could use nc instead.
Here's a snippet from a host script which starts a Postgres container and waits for it to be available before continuing:
POSTGRES_CONTAINER=`docker run -d --name postgres postgres:9.3`
# Wait for the postgres port to be available
until nc -z $(sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $POSTGRES_CONTAINER) 5432
do
echo "waiting for postgres container..."
sleep 0.5
done
Edit - This example does not require that you EXPOSE the port you are testing, since it accesses the Docker-assigned 'private' IP address for the container. However this only works if the docker host daemon is listening on the loopback (127.x.x.x). If (for example) you are on a Mac and running the boot2docker VM, you will be unable to use this method since you cannot route to the 'private' IP addresses of the containers from your Mac shell.
Assuming that you know the host+port of your MongoDB server (either because you used a -link, or because you injected them with -e), you can just use curl to check if the MongoDB server is running and accepting connections.
The following snippet will try to connect every second, until it succeeds:
#!/bin/sh
while ! curl http://$DB_PORT_27017_TCP_ADDR:$DB_PORT_27017_TCP_PORT/
do
echo "$(date) - still trying"
sleep 1
done
echo "$(date) - connected successfully"
I've ended up with something like:
#!/bin/bash
attempt=0
while [ $attempt -le 59 ]; do
attempt=$(( $attempt + 1 ))
echo "Waiting for server to be up (attempt: $attempt)..."
result=$(docker logs mongo)
if grep -q 'waiting for connections on port 27017' <<< $result ; then
echo "Mongodb is up!"
break
fi
sleep 2
done
Throwing my own solution out there:
I'm using docker networks so Mark's netcat trick didn't work for me (no access from the host network), and Erik's idea doesn't work for a postgres container (the container is marked as running even though postgres isn't yet available to connect to). So I'm just attempting to connect to postgres via an ephemeral container in a loop:
#!/bin/bash
docker network create my-network
docker run -d \
--name postgres \
--net my-network \
-e POSTGRES_USER=myuser \
postgres
# wait for the database to come up
until docker run --rm --net my-network postgres psql -h postgres -U myuser; do
echo "Waiting for postgres container..."
sleep 0.5
done
# do stuff with the database...
If you want to wait for an open port, you can use this simple one-liner:
until </dev/tcp/localhost/32022; do sleep 1; done
This waits until port 32022 accepts connections.
I had to tackle this recently and came up with an idea. When doing research for this task I got here, so I thought I'd share my solution with future visitors of this post.
Docker-compose-based solution
If you are using docker-compose you can check out my docker synchronization POC. I combined some of the ideas in other questions (thanks for that - upvoted).
The basic idea is that every container in the composite exposes a diagnostic service. Calling this service checks whether the required set of ports is open in the container and returns the overall status of the container (WARMUP/RUNNING, as per the POC). Each container also has a utility to check upon startup whether the dependent services are up and running. Only then does the container start up.
In the example docker-compose environment there are two services server1 and server2 and the client service which waits for both servers to start then sends a request to both of them and exits.
Excerpt from the POC
wait_for_server.sh
#!/bin/bash
server_host=$1
sleep_seconds=5
while true; do
echo -n "Checking $server_host status... "
output=$(echo "" | nc $server_host 7070)
if [ "$output" == "RUNNING" ]
then
echo "$server_host is running and ready to process requests."
break
fi
echo "$server_host is warming up. Trying again in $sleep_seconds seconds..."
sleep $sleep_seconds
done
Waiting for multiple containers:
trap 'kill $(jobs -p)' EXIT
for server in $DEPENDS_ON
do
/assets/wait_for_server.sh $server &
wait $!
done
Basic implementation of the diagnostic service (checkports.sh):
#!/bin/bash
for port in $SERVER_PORT; do
nc -z localhost $port;
rc=$?
if [[ $rc != 0 ]]; then
echo "WARMUP";
exit;
fi
done
echo "RUNNING";
Wiring up the diagnostic service to a port:
nc -v -lk -p 7070 -e /assets/checkports.sh
test/test_runner
#!/usr/bin/env ruby
$stdout.sync = true
def wait_ready(port)
until (`netstat -ant | grep #{port}`; $?.success?) do
sleep 1
print '.'
end
end
print 'Running supervisord'
system '/usr/bin/supervisord'
wait_ready(3000)
puts "It's ready :)"
$ docker run -v /tmp/mnt:/mnt myimage ruby mnt/test/test_runner
I'm testing like this whether the port is listening or not.
In this case I have the test running from inside the container, but it's also possible to check from outside whether mongodb is ready or not.
$ docker run -p 37017:27017 -d myimage
Then check whether port 37017 is listening or not from the host.
You can use wait-for-it, "a pure bash script that will wait on the availability of a host and TCP port. It is useful for synchronizing the spin-up of interdependent services, such as linked docker containers. Since it is a pure bash script, it does not have any external dependencies".
However, you should try to design your services to avoid these kind of interdependencies between services. Can your service try to reconnect to the database? Can you let your container just die if it can't connect to the database and let a container orchestrator (e.g. Docker Swarm) do it for you?
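Typical usage looks like this (host, port, timeout, and the follow-up command are placeholders):
./wait-for-it.sh db:5432 -t 30 -- echo "postgres is up"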
Docker-compose solution
After docker-compose up I don't know the name of the Docker container, so I use
docker inspect -f {{.State.Running}} $(docker-compose ps -q <CONTAINER_NAME>)
and check for true, as described here: https://stackoverflow.com/a/33520390/7438079
In order to verify if a PostgreSQL or MySQL (currently) Docker container is up and running (specially for migration tools like Flyway), you can use the wait-for binary: https://github.com/arcanjoaq/wait-for.
For a MongoDB Docker instance we did this and it works like a charm:
#!/usr/bin/env bash
until docker exec -i ${MONGO_IMAGE_NAME} mongo -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD}<<EOF
exit
EOF
do
echo "Waiting for Mongo to start..."
sleep 0.5
done
Here is what I ended up with; it is similar to a previous answer, just a little more concise:
until [[ $(docker logs $db_container_name) == *"waiting for connections on port 27017"* ]]
do
echo "waiting on mongo to boot..."
sleep 1
done
1: A container attached to a service with docker-compose doesn't launch when a Synology NAS starts up.
I had a problem launching a docker container on a Synology NAS that was attached to another container via docker-compose like this:
...
---
version: "3"
services:
gluetun:
image: qmcgaw/gluetun
container_name: gluetun
...
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:latest
container_name: qbittorrent
# Connect the service to gluetun
network_mode: "service:gluetun"
...
The Docker package used by Synology is different or not up to date, and apparently does not like a container being attached to another container with network_mode: the Synology Docker UI considers that the container is not attached to any network and therefore cannot launch it. However, from the command line it works very well, so I wanted to make a script to launch it automatically at NAS startup via a scheduled task.
Note: I created my docker-compose stack with Portainer.
2: The until loop does not work, even with all the different ways of writing the condition.
If, like me, on your Synology NAS you did not manage to make the until loop work as described above, you will have to use a while loop instead.
However, with the -x argument of bash to debug my code, the string comparison was being done correctly.
Output line (the same with all ways of writing the expression):
...
+ '[' false = true ']'
...
No matter what the result, nothing worked; I checked every time and there was always a moment when it did not behave as I wanted.
4: THE SOLUTION FOR SYNOLOGY
Environment
DSM : 7.1.1
bash : 4.4.23
docker : 20.10.3
After finding the right syntax, another problem had to be solved:
the Docker container status check can only work if the Synology Docker package is running.
So I used synopkg with is_onoff (is_active doesn't work and status returned too long a string). My solution looks like this:
#!/bin/bash
while [ "$(synopkg is_onoff Docker)" != "package Docker is turned on" ]; do
sleep 0.1;
done;
echo "Docker package is running..."
echo ""
while [ "$(docker inspect -f {{.State.Running}} gluetun)" = "false" ]; do
sleep 0.1;
done;
echo "gluetun is running..."
echo ""
if [ "$(docker ps -a -f status=exited -f name=qbittorrent --format '{{.Names}}')" ]; then
echo "Qbittorrent is not running I try to start this container"
docker start qbittorrent
else
echo "Qbittorrent docker is already started"
fi
So I was able to set up a scheduled task running as root at boot-up in the DSM configuration, and it worked fine after a reboot. Without checking the Synology Docker package status with synopkg, it did not work.
NOTE
I think the version of Bash in DSM doesn't like the until loop, or it is misinterpreted. Maybe this solution can help on systems where Bash is an older version and for some reason you can't, or don't want to, update its binaries to avoid breaking your system.