I am using a postgres image and I need to start the SSH service on startup.
The problem is that if I run a command in the docker-compose file, the process exits with code 0.
How can I start the SSH service but keep the postgres service active too?
Dockerfile:
FROM postgres:13
RUN apt update && apt install openssh-server sudo -y
RUN echo 'root:password' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
docker-compose.yml:
postgres:
  container_name: db_postgres
  command: sh -c "service ssh start "
  image: postgresc
  build:
    context: ../backend_apollo_server_express
    dockerfile: Dockerfile.database
  environment:
    - "POSTGRES_USER=lims"
    - "POSTGRES_PASSWORD=lims"
  volumes:
    - /home/javier/lims/dockerVolumes/db:/var/lib/postgresql/data
    - "/etc/timezone:/etc/timezone:ro"
    - "/etc/localtime:/etc/localtime:ro"
  ports:
    - 5434:5432
You can try running postgres after your command:
command: sh -c "service ssh start & postgres"
Try
command: sh -c "nohup service ssh start && service postgres start &"
to leave the process running in the background. That way the process won't exit.
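A more robust alternative is a small wrapper entrypoint that starts sshd and then hands off to the stock postgres entrypoint, so postgres stays the container's foreground process. A minimal sketch, assuming the official postgres base image (whose entrypoint script, docker-entrypoint.sh, is on the PATH); the file name start.sh is made up here:

#!/bin/bash
# start.sh - hypothetical wrapper entrypoint
set -e
# start sshd in the background
service ssh start
# exec the official entrypoint so postgres runs as PID 1 and keeps the container alive
exec docker-entrypoint.sh postgres

And in the Dockerfile:

COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
ENTRYPOINT ["/usr/local/bin/start.sh"]

With this in place, the command: override in docker-compose.yml can be dropped.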
I have a simple docker-compose.yml & associated Dockerfiles that give me a simple dev and prod environment for a nginx-uvicorn-django-postgres stack. I want to add an optional 'backup' container that just runs cron to periodically connect to the 'postgres' container.
# backup container - derived from [this blog][1]
ARG DOCKER_REPO
ARG ALPINE_DOCKER_IMAGE # ALPINE
ARG ALPINE_DOCKER_TAG # LATEST
FROM ${DOCKER_REPO}${ALPINE_DOCKER_IMAGE}:${ALPINE_DOCKER_TAG}
ARG DB_PASSWORD
ARG DB_HOST # "db"
ARG DB_PORT # "5432"
ARG DB_NAME # "ken"
ARG DB_USERNAME # "postgres"
ENV PGPASSWORD=${DB_PASSWORD} HOST=${DB_HOST} PORT=${DB_PORT} PSQL_DB_NAME=${DB_NAME} \
USERNAME=${DB_USERNAME}
RUN printenv
RUN mkdir /output && \
    mkdir /output/backups && \
    mkdir /scripts && \
    chmod a+x /scripts
COPY ./scripts/ /scripts/
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/15min/${DB_NAME}_15
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/daily/${DB_NAME}_day
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/weekly/${DB_NAME}_week
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/monthly/${DB_NAME}_month
RUN apk update && \
    apk upgrade && \
    apk add --no-cache postgresql-client && \
    chmod a+x /etc/periodic/15min/${DB_NAME}_15 && \
    chmod a+x /etc/periodic/daily/${DB_NAME}_day && \
    chmod a+x /etc/periodic/weekly/${DB_NAME}_week && \
    chmod a+x /etc/periodic/monthly/${DB_NAME}_month
The django container is derived from the official Python image and connects (through psycopg2) with values (as ENV values) for host, dbname, username, password, and port. The 'backup' container has these same values, but I get this error from the command line:
> pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME"
pg_dump: error: could not translate host name "db" to address: Name does not resolve
Is Alpine missing something relevant that is present in the official Python image?
Edit:
I am running a system of shell scripts that take care of housekeeping for different configurations, so
> ./ken.sh dev_server
will set up the environment variables and then run docker-compose for the project and the containers.
docker-compose.yml doesn't explicitly create a network.
I don't know what "db" should resolve to beyond just 'db://' - it's what the django container gets, and it is able to resolve a connection to the 'db' service.
services:
  db:
    image: ${DOCKER_REPO}${DB_DOCKER_IMAGE}:${DB_DOCKER_TAG} # postgres:14
    container_name: ${PROJECT_NAME}_db
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - PGPASSWORD
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    command: ["postgres", "-c", "log_statement=all"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -h db"]
      interval: 2s
      timeout: 5s
      retries: 25
This is the 'dev_server' function run by the parent ken.sh script:
function dev_server() {
  trap cleanup EXIT
  wait_and_launch_browser &
  docker-compose -p "${PROJECT_NAME}" up -d --build db nginx web pgadmin backup
  echo "Generate static files and copy them into static and file volumes."
  source ./scripts/generate_static_files.sh
  docker-compose -p "${PROJECT_NAME}" logs -f web nginx backup
}
Update: Worked through "Reasons why docker containers can't talk to each other" and found that all the containers are on a ken_default network, from 170.20.0.2 to 170.20.0.6.
I can docker exec ken_backup ping ken_db -c2, but not from db to backup, because the db container doesn't include ping.
From a shell on backup I cannot ping ken_db - ken_db doesn't resolve, nor does 'db'.
I can't make much of that and I'm not sure what to try next.
You are running the backup container as a separate service.
Docker Compose creates a separate default network per project (per docker-compose.yml file), and containers can only reach each other by name when they share a network.
You need to get the db and your backup container on the same docker network.
See this post
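If the backup container really is started from a separate docker-compose.yml, one way to get both onto one network is a shared external network. A minimal sketch (the network name backup_net is invented for illustration):

# docker-compose.yml that owns the database
networks:
  backup_net:
    name: backup_net
services:
  db:
    networks:
      - backup_net

# docker-compose.yml that owns the backup job
networks:
  backup_net:
    external: true
services:
  backup:
    networks:
      - backup_net

With that, "db" resolves from the backup container via Docker's embedded DNS.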
I want to change the start command of postgres to support SSL in the default docker image.
db.Dockerfile
FROM postgres:14.5-alpine
COPY ./.docker/dev/init-database.sh /docker-entrypoint-initdb.d/
COPY ./.docker/dev/migrations/database_schema.tar ./
COPY ./.docker/dev/certs/out/server.key /var/lib/postgresql
COPY ./.docker/dev/certs/out/server.crt /var/lib/postgresql
COPY ./.docker/dev/certs/out/myCA.crt /var/lib/postgresql
COPY ./.docker/dev/certs/out/myCA.crl /var/lib/postgresql
COPY ./.docker/dev/certs/out/news_user.key ./
COPY ./.docker/dev/certs/out/news_user.crt ./
RUN chown 0:70 /var/lib/postgresql/server.key && chmod 640 /var/lib/postgresql/server.key
RUN chown 0:70 /var/lib/postgresql/server.crt && chmod 640 /var/lib/postgresql/server.crt
RUN chown 0:70 /var/lib/postgresql/myCA.crt && chmod 640 /var/lib/postgresql/myCA.crt
RUN chown 0:70 /var/lib/postgresql/myCA.crl && chmod 640 /var/lib/postgresql/myCA.crl
RUN chown 0:70 ./news_user.key && chmod 640 ./news_user.key
RUN chown 0:70 ./news_user.crt && chmod 640 ./news_user.crt
RUN chown postgres:postgres /docker-entrypoint-initdb.d/init-database.sh
EXPOSE 5432
CMD postgres -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key -c ssl_ca_file=/var/lib/postgresql/myCA.crt -c ssl_crl_file=/var/lib/postgresql/myCA.crl
When I run this image, I get an error saying
"root" execution of the PostgreSQL server is not permitted.
news_database | The server must be started under an unprivileged user ID to prevent
news_database | possible system security compromise. See the documentation for
news_database | more information on how to properly start the server.
The default postgres image comes with a user named postgres, so I tried adding a USER postgres line before the CMD.
Now it gives me a new error saying
postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
Can someone kindly tell me how to fix this so that the postgres command works?
UPDATE 1
I am not running the Dockerfile directly; I run it via a docker-compose.yml file, since I have a Python script in another container that accesses the database.
docker-compose.yml
version: "3.8"
services:
news_database:
build:
context: ../..
dockerfile: ./.docker/dev/db.Dockerfile
container_name: news_database
restart: unless-stopped
env_file:
- .env
ports:
- "5432:5432"
volumes:
- news_db:/var/lib/postgresql/data
# https://stackoverflow.com/a/55835081/5371505
# https://stackoverflow.com/questions/72213661/test-connection-to-postgres-with-ssl-with-the-command-line
healthcheck:
test: ["CMD-SHELL", "pg_isready -d 'hostaddr=$DATABASE_HOST user=$DATABASE_USER port=$DATABASE_PORT dbname=$DATABASE_NAME'"]
interval: 5s
timeout: 5s
retries: 5
# Dont add a 'restart' policy to the app because we run it as a cronjob regardless of whether it succeeds or fails
news_app:
# https://stackoverflow.com/questions/65594752/docker-compose-how-to-reference-files-in-other-directories
build:
context: ../..
dockerfile: ./.docker/dev/app.Dockerfile
env_file:
- .env
image: news_app
container_name: news_app
depends_on:
news_database:
condition: service_healthy
volumes:
news_db:
driver: local
I run the above file with this command:
docker compose -f ".docker/dev/docker-compose.yml" up -d --build news_database && docker compose -f ".docker/dev/docker-compose.yml" logs --follow
I see a couple of problems with your Dockerfile. First, I've stripped out a few bits to reduce the complexity to something I can reproduce locally, so I have:
FROM postgres:14.5-alpine
COPY ./certs/out/server.key /var/lib/postgresql
COPY ./certs/out/server.crt /var/lib/postgresql
RUN chown 0:70 /var/lib/postgresql/server.key && chmod 640 /var/lib/postgresql/server.key
RUN chown 0:70 /var/lib/postgresql/server.crt && chmod 640 /var/lib/postgresql/server.crt
EXPOSE 5432
CMD postgres -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key -c ssl_ca_file=/var/lib/postgresql/myCA.crt -c ssl_crl_file=/var/lib/postgresql/myCA.crl
If we build a pgtest image from this Dockerfile and run it, we see:
$ docker build -t pgtest .
...
$ docker run --rm pgtest
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
What's going on here? It turns out this is caused by how you've written your CMD directive. You need to use the "exec style", which means passing in a JSON array rather than a string:
CMD ["postgres", \
"-c", "ssl=on", \
"-c", "ssl_cert_file=/var/lib/postgresql/server.crt", \
"-c", "ssl_key_file=/var/lib/postgresql/server.key"]
This is necessary because there's a check in the entrypoint script for the image that looks like this:
if [ "$1" = 'postgres' ] && ! _pg_want_help "$@"; then
When you write:
CMD postgres -c ...
This wraps the command in a call to sh -c, so $1 will be sh, thus skipping the important database initialization logic. When we use the exec form of the CMD statement, $1 will be postgres, as expected.
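To make the difference concrete, here is roughly what the entrypoint script receives in each case (a sketch, not output from the actual image):

# Shell form: CMD postgres -c ssl=on
#   arguments: /bin/sh -c "postgres -c ssl=on"   -> $1 is the shell, not postgres
# Exec form:  CMD ["postgres", "-c", "ssl=on"]
#   arguments: postgres -c ssl=on                -> $1 is "postgres"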
With that fix, the image runs correctly:
$ docker run --rm -e POSTGRES_PASSWORD=secret pgtest
.
.
.
PostgreSQL init process complete; ready for start up.
2022-08-28 12:48:48.784 UTC [1] LOG: starting PostgreSQL 14.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2022-08-28 12:48:48.784 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-08-28 12:48:48.784 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-08-28 12:48:48.786 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-08-28 12:48:48.789 UTC [48] LOG: database system was shut down at 2022-08-28 12:48:48 UTC
2022-08-28 12:48:48.792 UTC [1] LOG: database system is ready to accept connections
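To confirm that connections are actually encrypted, something like the following should work from the host (a sketch, assuming port 5432 is published and the POSTGRES_PASSWORD used above; pg_stat_ssl has been available since PostgreSQL 9.5):

psql "host=localhost port=5432 user=postgres password=secret sslmode=require" \
  -c 'select ssl, version from pg_stat_ssl where pid = pg_backend_pid();'

sslmode=require makes psql fail outright if the server cannot do SSL, and the query reports whether this particular session is encrypted.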
I am new to docker, and I have created a container with python + postgres, which runs a python script that collects some data and writes it down on the SQL database. Now I need to set this job to run each day, and then the nightmare started.

I did not manage to create a separate container for this job, so I tried to create a file and copy it into the container via the Dockerfile (see below). I did not manage to run cron as the entrypoint for the container, because then my database was not mounted. So I create the container, access it, give full permissions to /var/www/html, and create the database table. And then I run cron. No error, but nothing happens, and no log is written to /var/log/cron.log. Here are my files:
Dockerfile:
FROM postgres:latest
USER root
RUN apt-get update && apt-get install -y python3 python3-pip
RUN apt-get -y install cron nano
RUN apt-get -y install postgresql-server-dev-10 gcc python3-dev musl-dev
RUN pip3 install psycopg2 \
    bs4 \
    requests \
    pytz
COPY temp-alerts-cron /etc/cron.d/temp-alerts-cron
RUN chmod 0777 /etc/cron.d/temp-alerts-cron
RUN chmod gu+rw /var/run/
RUN chmod gu+s /usr/sbin/cron
RUN touch /var/log/cron.log
RUN chmod 0777 /var/log/cron.log
RUN crontab /etc/cron.d/temp-alerts-cron
USER postgres
EXPOSE 5432
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
The temp-alerts-cron file:
20 13 * * * root /var/www/html/run.sh >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
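Worth noting: the Dockerfile above both copies this file into /etc/cron.d/ and loads it with crontab, and the two formats differ. Entries in /etc/cron.d carry a user field, while files installed via crontab must omit it. A sketch of the same job in both forms:

# /etc/cron.d format (sixth field is the user)
20 13 * * * root /var/www/html/run.sh >> /var/log/cron.log 2>&1
# crontab format (no user field)
20 13 * * * /var/www/html/run.sh >> /var/log/cron.log 2>&1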
And the called script:
echo 'inside thingy' >> /var/log/cron.log 2>&1
python3 /var/www/html/nuria_main.py
In case it is needed, here the docker-compose.yml:
services:
postgres:
container_name: 'temp-postgres'
build: # build the image from Dockerfile
context: ${PWD}
volumes: # bind mount volume for Postgres data
- pg-data:/var/lib/postgresql/data
- ./python-app:/var/www/html
restart: unless-stopped
environment:
- POSTGRES_USR=xxadmin
- POSTGRES_DB=tempdb
- POSTGRES_PASSWORD=secret
expose:
- "5432"
networks:
kong:
networks:
kong:
external:
name: kong_net
volumes:
pg-data:
Hope somebody knows what I am doing wrong. I do not get any log or error, so I am lost.
Thanks!
I have a Python docker container that needs to wait until another container (a postgres server) finishes setup. I tried the standard wait-for-it.sh, but several of the commands it relies on weren't included. I tried a basic sleep (again in an sh file), but now it's reporting exec: 300: not found when trying to finally execute the command I'm waiting on.
How do I get around this (preferably without changing the image or having to extend an image)?
I know I could also just run a Python script, but ideally I'd like to use wait-for-it.sh to wait for the server to turn up rather than just sleep.
Dockerfile (for stuffer):
FROM python:2.7.13
ADD ./stuff/bin /usr/local/bin/
ADD ./stuff /usr/local/stuff
WORKDIR /usr/local/bin
COPY requirements.txt /opt/updater/requirements.txt
COPY internal_requirements.txt /opt/stuff/internal_requirements.txt
RUN pip install -r /opt/stuff/requirements.txt
RUN pip install -r /opt/stuff/other_requirements.txt
docker-compose.yml:
version: '3'
services:
  local_db:
    build: ./local_db
    ports:
      - "localhost:5432:5432"
  stuffer:
    build: ./
    depends_on:
      - local_db
    command: ["./wait-for-postgres.sh", "-t", "300", "localhost:5432", "--", "python", "./stuffing.py", "--file", "./afile"]
Script I want to use (but can't because no psql or exec):
#!/bin/bash
# wait-for-postgres.sh

set -e

host="$1"
shift
cmd="$@"

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
As Sergey's comment pointed out, I had the wrong argument order. This issue had nothing to do with docker and everything to do with my inability to read.
I made an example so you can see it working:
https://github.com/nitzap/wait-for-postgres
On the other hand, you can also get errors from the script you use to check that the service is up. You should not refer to localhost, because inside a container that resolves to the container itself; if you want to reach another container, it has to be through the name of its service.
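Applied to the compose file above, that means pointing the wait script at the local_db service rather than localhost. A sketch, assuming the script keeps the wait-for-it.sh argument convention of host:port before the -- separator:

command: ["./wait-for-postgres.sh", "-t", "300", "local_db:5432", "--", "python", "./stuffing.py", "--file", "./afile"]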
There is a way to link the /data/db directory of the container to your local file system, but I cannot find anything about configuration. How do I link /etc/mongo.conf to something on my local file system? Or maybe some other approach is used. Please share your experience.
I'm using the mongodb 3.4 official docker image. Since mongod doesn't read a config file by default, this is how I start the mongod service:
docker run -d --name mongodb-test -p 37017:27017 \
-v /home/sa/data/mongod.conf:/etc/mongod.conf \
-v /home/sa/data/db:/data/db mongo --config /etc/mongod.conf
Removing -d will show you the initialization of the container.
Using a docker-compose.yml:
version: '3'
services:
  mongodb_server:
    container_name: mongodb_server
    image: mongo:3.4
    env_file: './dev.env'
    command:
      - '--auth'
      - '-f'
      - '/etc/mongod.conf'
    volumes:
      - '/home/sa/data/mongod.conf:/etc/mongod.conf'
      - '/home/sa/data/db:/data/db'
    ports:
      - '37017:27017'
then
docker-compose up
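For reference, the mounted mongod.conf might look something like this minimal sketch (values are illustrative, not from the original answer; mongod 3.x reads this YAML format):

# /home/sa/data/mongod.conf
storage:
  dbPath: /data/db
net:
  port: 27017
  bindIp: 0.0.0.0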
When you run the docker container like this:
docker run -d -v /var/lib/mongo:/data/db \
-v /home/user/mongo.conf:/etc/mongo.conf -p port:port image_name
/var/lib/mongo is the host's mongo folder; /data/db is the corresponding folder inside the docker container.
I merely wanted to know the command used to specify a config for mongo through the docker run command.
First you want to specify the volume flag with -v to map a file or directory from the host to the container. So if you had a config file located in /home/ubuntu/ and wanted to place it within the /etc/ folder of the container, you would specify it as follows:
-v /home/ubuntu/mongod.conf:/etc/mongod.conf
Then specify the command for mongo to read the config file after the image like so:
mongo -f /etc/mongod.conf
If you put it all together, you'll get something like this:
docker run -d --net="host" --name mongo-host -v /home/ubuntu/mongod.conf:/etc/mongod.conf mongo -f /etc/mongod.conf
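To check that the server really picked up the mounted config, one option is to ask it for its parsed command-line options (a sketch; getCmdLineOpts is a standard MongoDB admin command):

docker exec mongo-host mongo --eval 'printjson(db.adminCommand({getCmdLineOpts: 1}))'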
For some reason I have to use MongoDB version 3.0.1 (written as of 2016-09-13).
This is what I found:
# first step: run mongo 3.0.1 without a conf
docker run --name testmongo -p 27017:27017 -d mongo:3.0.1
# second step: inspect the image's entrypoint
docker exec -it testmongo cat /entrypoint.sh
#!/bin/bash
set -e

if [ "${1:0:1}" = '-' ]; then
  set -- mongod "$@"
fi

if [ "$1" = 'mongod' ]; then
  chown -R mongodb /data/db
  numa='numactl --interleave=all'
  if $numa true &> /dev/null; then
    set -- $numa "$@"
  fi
  exec gosu mongodb "$@"
fi

exec "$@"
From this I can see that there are two ways to start a mongod service. What I tried:
docker run --name mongo -d -v your/host/dir:/container/dir mongo:3.0.1 -f /container/dir/mongod.conf
The trailing -f is a mongod parameter; you can also use --config instead. Make sure the path your/host/dir exists and that the file mongod.conf is in it.