I am new to Docker. I have created a container with Python + Postgres, which runs a Python script that collects some data and writes it to the SQL database. Now I need to schedule this job to run once a day, and that is where the nightmare started. I did not manage to create a separate container for this job, so I tried to create a crontab file and copy it into the container via the Dockerfile (shown below). I also could not run cron as the entrypoint of the container, because then my database was not mounted. So instead I create the container, access it, give full permissions to /var/www/html, and create the database table. Then I run cron. No error, but nothing happens, and no log is written to /var/log/cron.log. Here are my files:
Dockerfile:
FROM postgres:latest
USER root
RUN apt-get update && apt-get install -y python3 python3-pip
RUN apt-get -y install cron nano
RUN apt-get -y install postgresql-server-dev-10 gcc python3-dev musl-dev
RUN pip3 install psycopg2 \
    bs4 \
    requests \
    pytz
COPY temp-alerts-cron /etc/cron.d/temp-alerts-cron
RUN chmod 0777 /etc/cron.d/temp-alerts-cron
RUN chmod gu+rw /var/run/
RUN chmod gu+s /usr/sbin/cron
RUN touch /var/log/cron.log
RUN chmod 0777 /var/log/cron.log
RUN crontab /etc/cron.d/temp-alerts-cron
USER postgres
EXPOSE 5432
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
The temp-alerts-cron file:
20 13 * * * root /var/www/html/run.sh >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
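For reference, the two cron formats differ: files under /etc/cron.d include a user field, while tables installed with crontab must not, so a /etc/cron.d-style line loaded via crontab makes cron try to run a command named root. A minimal sketch of both variants:
# /etc/cron.d/temp-alerts-cron -- read by the daemon directly, user field required
20 13 * * * root /var/www/html/run.sh >> /var/log/cron.log 2>&1
# same job installed via `crontab <file>` -- per-user table, no user field
20 13 * * * /var/www/html/run.sh >> /var/log/cron.log 2>&1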
And the called script:
echo 'inside thingy' >> /var/log/cron.log 2>&1
python3 /var/www/html/nuria_main.py
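For the cron entry to execute run.sh directly, the script also needs a shebang line and the executable bit (both easy to miss); a minimal sketch, with the chmod done on the host or in the Dockerfile:
#!/bin/sh
# run.sh -- make executable with: chmod +x python-app/run.sh
echo 'inside thingy' >> /var/log/cron.log 2>&1
python3 /var/www/html/nuria_main.py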
In case it is needed, here the docker-compose.yml:
services:
  postgres:
    container_name: 'temp-postgres'
    build: # build the image from Dockerfile
      context: ${PWD}
    volumes: # bind mount volume for Postgres data
      - pg-data:/var/lib/postgresql/data
      - ./python-app:/var/www/html
    restart: unless-stopped
    environment:
      - POSTGRES_USER=xxadmin
      - POSTGRES_DB=tempdb
      - POSTGRES_PASSWORD=secret
    expose:
      - "5432"
    networks:
      kong:
networks:
  kong:
    external:
      name: kong_net
volumes:
  pg-data:
Hope somebody knows what I am doing wrong. I do not get any log or error, so I am lost.
Thanks!
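For comparison, a separate-container approach keeps the database image untouched: a sidecar service on the same network runs cron, and the Python script connects to host postgres (the service name) instead of a local socket. A rough sketch, with a hypothetical service name and Dockerfile:
# hypothetical extra service in the same docker-compose.yml
cron:
  build: ./python-app   # hypothetical Dockerfile: FROM python:3, apt-get install cron, pip install deps, crontab the schedule
  volumes:
    - ./python-app:/var/www/html
  networks:
    kong:
  command: ["cron", "-f"]   # cron in the foreground keeps the container alive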
I have a simple docker-compose.yml & associated Dockerfiles that give me a simple dev and prod environment for a nginx-uvicorn-django-postgres stack. I want to add an optional 'backup' container that just runs cron to periodically connect to the 'postgres' container.
# backup container - derived from [this blog][1]
ARG DOCKER_REPO
ARG ALPINE_DOCKER_IMAGE # ALPINE
ARG ALPINE_DOCKER_TAG # LATEST
FROM ${DOCKER_REPO}${ALPINE_DOCKER_IMAGE}:${ALPINE_DOCKER_TAG}
ARG DB_PASSWORD
ARG DB_HOST # "db"
ARG DB_PORT # "5432"
ARG DB_NAME # "ken"
ARG DB_USERNAME # "postgres"
ENV PGPASSWORD=${DB_PASSWORD} HOST=${DB_HOST} PORT=${DB_PORT} PSQL_DB_NAME=${DB_NAME} \
USERNAME=${DB_USERNAME}
RUN printenv
RUN mkdir /output && \
mkdir /output/backups && \
mkdir /scripts && \
chmod a+x /scripts
COPY ./scripts/ /scripts/
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/15min/${DB_NAME}_15
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/daily/${DB_NAME}_day
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/weekly/${DB_NAME}_week
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/monthly/${DB_NAME}_month
RUN apk update && \
apk upgrade && \
apk add --no-cache postgresql-client && \
chmod a+x /etc/periodic/15min/${DB_NAME}_15 && \
chmod a+x /etc/periodic/daily/${DB_NAME}_day && \
chmod a+x /etc/periodic/weekly/${DB_NAME}_week && \
chmod a+x /etc/periodic/monthly/${DB_NAME}_month
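One caveat with this Dockerfile as shown: the scripts under /etc/periodic only run if BusyBox crond is actually started, and nothing above starts it. A minimal sketch of the missing final instruction, assuming crond should be the container's main process:
# run crond in the foreground (-f) with logging (-l 8) so the container stays up
CMD ["crond", "-f", "-l", "8"]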
The django container is derived from the official Python image and connects (through psycopg2) with values (as ENV values) for host, dbname, username, password and port. The 'backup' container has the same values, but I get this error from the command line:
> pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME"
pg_dump: error: could not translate host name "db" to address: Name does not resolve
Is Alpine missing something relevant that is present in the official Python image?
Edit:
I am running a system of shell scripts that takes care of housekeeping for different configurations, so
> ./ken.sh dev_server
will set up the environment variables and then run docker-compose for the project and the containers
docker-compose.yml doesn't explicitly create a network.
I don't know what "db" should resolve to beyond just 'db://' - it's what the django container gets, and that container is able to resolve a connection to the 'db' service.
services:
  db:
    image: ${DOCKER_REPO}${DB_DOCKER_IMAGE}:${DB_DOCKER_TAG} # postgres:14
    container_name: ${PROJECT_NAME}_db
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - PGPASSWORD
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    command: ["postgres", "-c", "log_statement=all"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -h db"]
      interval: 2s
      timeout: 5s
      retries: 25
This is the 'dev_server' script run by the parent ken.sh script
function dev_server() {
  trap cleanup EXIT
  wait_and_launch_browser &
  docker-compose -p "${PROJECT_NAME}" up -d --build db nginx web pgadmin backup
  echo "Generate static files and copy them into static and file volumes."
  source ./scripts/generate_static_files.sh
  docker-compose -p "${PROJECT_NAME}" logs -f web nginx backup
}
Update: Worked through "Reasons why docker containers can't talk to each other" and found that all the containers are on a ken_default network, from 170.20.0.2 to 170.20.0.6.
I can docker exec ken_backup ping ken_db -c2, but not from db to backup, because the db container doesn't include ping.
From a shell on backup I cannot ping ken_db - ken_db doesn't resolve, nor does 'db'.
I can't make much of that and I'm not sure what to try next.
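For debugging name resolution in this situation, a few standard commands (using the container and network names from the question) show which network each container actually joined and what Docker's embedded DNS returns:
docker network inspect ken_default             # lists every container attached to the network
docker exec ken_backup nslookup db             # queries the embedded DNS from inside the container
docker exec ken_backup cat /etc/resolv.conf    # on a user-defined network this points at 127.0.0.11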
You are running the backup container as a separate service.
Docker Compose creates a default network per project (that is, per docker-compose.yml file), so services started from different compose files land on different networks.
You need to get the db and backup containers onto the same Docker network.
See this post
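A minimal sketch of one way to do that, assuming an externally created network that both compose files reference; the network name is illustrative:
# created once with: docker network create shared_net
services:
  db:
    networks: [shared_net]
  backup:
    networks: [shared_net]
networks:
  shared_net:
    external: true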
I am using a postgres image and I need to start the ssh service on startup.
The problem is that if I run a command in the docker-compose file, the process exits with code 0.
How can I start the ssh service but keep the postgres service active too?
Dockerfile:
FROM postgres:13
RUN apt update && apt install openssh-server sudo -y
RUN echo 'root:password' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
docker-compose file:
postgres:
  container_name: db_postgres
  command: sh -c "service ssh start"
  image: postgresc
  build:
    context: ../backend_apollo_server_express
    dockerfile: Dockerfile.database
  environment:
    - "POSTGRES_USER=lims"
    - "POSTGRES_PASSWORD=lims"
  volumes:
    - /home/javier/lims/dockerVolumes/db:/var/lib/postgresql/data
    - "/etc/timezone:/etc/timezone:ro"
    - "/etc/localtime:/etc/localtime:ro"
  ports:
    - 5434:5432
You can try running postgres after your command:
command: sh -c "service ssh start & postgres"
Try:
command: sh -c "nohup service ssh start && service postgres start &"
in order to leave the process running in the background; this way the process won't exit.
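Another pattern worth sketching (an assumption, not part of the answers above): start ssh first, then hand off to the image's own entrypoint, so Postgres runs in the foreground as the main process and keeps the official initialization behavior:
command: sh -c "service ssh start && exec docker-entrypoint.sh postgres"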
I have the following Dockerfile for a django app:
FROM python:3.6
RUN mkdir /server
WORKDIR /server
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
libsqlite3-dev
RUN pip install -U pip setuptools
RUN pip install --upgrade pip
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
WORKDIR /server/django
ENTRYPOINT ["/bin/bash", "../docker-entrypoint-server"]
The relevant docker-compose section related to it:
version: '3'
services:
  server:
    build: .
    container_name: server
    environment:
      SERVER_ENV: ${SERVER_ENV}
      DB_AUTH_SOURCE: ${DB_AUTH_SOURCE}
      DB_NAME: ${DB_NAME}
      DB_HOST: ${DB_HOST}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
    networks:
      - app
    ports:
      - 8081:8000
    volumes:
      - .:/server
    command: /bin/bash
    tty: true
    stdin_open: true
They work like a charm on Linux/Mac, but not on Windows 10. When the build reaches the COPY instruction, all it copies is the directory itself and the first nested directory within it, with no file contents.
I checked the shared C:\ drive option in Docker settings; that didn't work.
I tried PowerShell with admin rights, and nothing.
What could be the possible causes? Why does it work on two host OSes and not on Windows 10?
edit 1: versions
Windows: Windows 10 Education, 1803
Linux: Ubuntu 18.04 LTS
Mac: High Sierra 10.13
Docker: latest version on all OSs
edit 2: the solution
It turns out that unsharing and re-sharing the drive, as pointed out by JDPeckham, revealed the problem: a firewall misconfiguration caused by an antivirus that was controlling those settings.
This article is very helpful for troubleshooting: https://success.docker.com/article/error-a-firewall-is-blocking-file-sharing-between-windows-and-the-containers
I want to deploy a Flask application that uses Orator as the ORM, and I'm having problems connecting to a SQL instance on Google Cloud Platform. I've already set up the IAM permissions needed, as explained here, but I still cannot connect to the instance. If I manually whitelist the instance's current IP in the firewall, the connection succeeds, but when the IP changes (which it does, several times) I cannot connect anymore.
This is my Dockerfile:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
CMD gunicorn -b :$PORT main:app
This is my app.yaml:
runtime: custom
env: flex
env_variables:
  POSTGRES_HOST: <SQL-INSTANCE-IP>
  POSTGRES_DB: <MY-POSTGRES-DB>
  POSTGRES_USER: <MY-POSTGRES-USER>
  POSTGRES_PASSWORD: <MY-POSTGRES-PASSWORD>
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1
The problem was that cloud_sql_proxy was not being executed in my Docker image. To fix this I had to create a script like this:
run_app.sh
#!/bin/bash
/app/cloud_sql_proxy -dir=/cloudsql -instances=<INSTANCE-CONNECTION-NAME> -credential_file=<CREDENTIAL-FILE> &
gunicorn -b :$PORT main:app
Then give it execution permission:
chmod +x run_app.sh
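One refinement worth considering (not part of the original answer): exec-ing gunicorn so it replaces the shell as the main process and receives the container's shutdown signals:
#!/bin/bash
# start the proxy in the background, then replace the shell with gunicorn
/app/cloud_sql_proxy -dir=/cloudsql -instances=<INSTANCE-CONNECTION-NAME> -credential_file=<CREDENTIAL-FILE> &
exec gunicorn -b :$PORT main:app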
Then I changed my Dockerfile so that it downloads cloud_sql_proxy, creates the /cloudsql directory, and executes the new script:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /app/cloud_sql_proxy
RUN chmod +x /app/cloud_sql_proxy
RUN mkdir /cloudsql; chmod 777 /cloudsql
ADD . /app
CMD /app/run_app.sh
And finally changed the POSTGRES_HOST in my app.yaml:
runtime: custom
env: flex
env_variables:
  POSTGRES_HOST: "/cloudsql/<INSTANCE-CONNECTION-NAME>"
  POSTGRES_DB: <MY-POSTGRES-DB>
  POSTGRES_USER: <MY-POSTGRES-USER>
  POSTGRES_PASSWORD: <MY-POSTGRES-PASSWORD>
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1
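As an aside, the <INSTANCE-CONNECTION-NAME> placeholder can be looked up with a standard gcloud command (the instance name here is a placeholder):
gcloud sql instances describe <INSTANCE-NAME> --format='value(connectionName)'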
Cheers
When I run the command psql --version within the railsApp container, I get 9.4.12 and when I run the same within the postgres container, I get 9.6.2. How can I get the versions to match?
I am getting the following error when I try to run a migration on the Rails app, which does a pg_dump SQL import.
pg_dump: server version: 9.6.2; pg_dump version: 9.4.12
pg_dump: aborting because of server version mismatch
rails aborted!
Here's my docker-compose.yml file:
version: "2.1"
services:
railsApp:
build:
context: ./
ports:
- "3000:3000"
links:
- postgres
volumes:
- .:/app
postgres:
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ./.postgres:/var/lib/postgresql
The Dockerfile:
FROM ruby:2.3.3
# setup /app as our working directory
RUN mkdir /app
WORKDIR /app
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Set debconf to run non-interactively
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
libssl-dev \
python \
rsync \
software-properties-common \
wget \
postgresql-client \
graphicsmagick \
&& rm -rf /var/lib/apt/lists/*
# Install node and npm with nvm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION v7.2.1
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION
ENV PATH $NODE_PATH/bin:./node_modules/.bin:$PATH
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Install our ruby dependencies
ADD Gemfile Gemfile.lock /app/
RUN bundle install
# copy the rest of our code over
ADD . /app
ENV RAILS_ENV development
ENV SECRET_KEY_BASE a6bdc5f788624f00b68ff82456d94bf81bb50c2e114b2be19af2e6a9b76f9307b11d05af4093395b0471c4141b3cd638356f888e90080f8ae60710f992beba8f
# Expose port 3000 to the Docker host, so we can access it from the outside.
EXPOSE 3000
# Set the default command to run our server on port 3000
CMD ["rails", "server", "-p", "3000", "-b", "0.0.0.0"]
The same issue for me; I used an alternative way to take a dump.
First I accessed the container's terminal, ran pg_dump inside it, and copied the file from the container to the host.
Below are the commands:
docker exec -it <container-id> /bin/bash    # access the container's terminal
pg_dump > ~/dump                            # take the dump inside the container
docker cp <container-id>:/root/dump ~/dump  # copy the dump file to the host
Hope the above solution helps.
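A variant of the same idea that skips the interactive shell; the database user and name here are assumptions:
docker exec -t <container-id> pg_dump -U postgres <db-name> > dump.sql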
The easiest approach is to use the correct postgres version in docker-compose.yml. Change:
postgres:
  image: postgres:9.6
To:
postgres:
  image: postgres:9.4.2
All available versions here.
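Alternatively, upgrade the client instead of downgrading the server: the ruby:2.3.3 image is Debian-based, so the PostgreSQL apt repository can supply a matching 9.6 client. A sketch, assuming a Debian jessie base (adjust the suite name to the image's actual release):
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
 && wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
 && apt-get update \
 && apt-get install -y postgresql-client-9.6 \
 && rm -rf /var/lib/apt/lists/*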