Docker container not starting when connecting to external PostgreSQL - postgresql

I have a Docker container with Redmine in it and postgresql-95 running on my host machine, and I want to connect my Redmine container to that PostgreSQL instance. I followed these steps to connect my container to the external PostgreSQL: https://github.com/sameersbn/docker-redmine.
Assuming my host machine has the IP 192.168.100.6, I ran this command:
docker run --name redmine -it -d \
--publish=10083:80 \
--env='REDMINE_PORT=10083' \
--env='DB_ADAPTER=postgresql' \
--env='DB_HOST=192.168.100.6' \
--env='DB_NAME=redmine_production' \
--env='DB_USER=redmine' --env='DB_PASS=password' \
redmine-docker
The container runs for about a minute but then stops, even before it starts nginx and Redmine inside. I need help with this configuration. Thank you.
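(A first diagnostic step, assuming the failure really is the database connection and not the image itself, is to check the container logs and confirm that the host's PostgreSQL accepts remote connections; the paths and subnet below are examples only.)
# see why the container exited
docker logs redmine
# on the host, make PostgreSQL listen on the LAN interface (postgresql.conf):
#   listen_addresses = '*'
# and allow the Docker bridge / LAN subnet in pg_hba.conf, e.g.:
#   host  redmine_production  redmine  172.17.0.0/16  md5
# then reload PostgreSQL and test the credentials manually:
psql -h 192.168.100.6 -U redmine -d redmine_production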

Related

How to install multiple PgBouncers on a single server with different pool_mode

How can we install multiple PgBouncer instances with different pool modes on a single server (Ubuntu 18.04)?
When I try to install it a second time, it says it is already installed.
Is there another way to install it on a different port?
You could install a container runtime (e.g. Docker) and run multiple containers, each containing a pgbouncer installation, e.g. using this image: https://github.com/edoburu/docker-pgbouncer
First, install docker:
sudo apt install docker.io
Then you can start as many pgbouncers as you like.
pgbouncer-1:
sudo docker run --rm -d \
--name pgbouncer-session \
-e DATABASE_URL="postgres://user:pass@postgres-host/database" \
-e POOL_MODE=session \
-p 5432:5432 \
edoburu/pgbouncer
pgbouncer-2:
sudo docker run --rm -d \
--name pgbouncer-transaction \
-e DATABASE_URL="postgres://user:pass@postgres-host/database" \
-e POOL_MODE=transaction \
-p 5433:5432 \
edoburu/pgbouncer
Note that the containers use different ports on the host (the first uses 5432, the second 5433).
If you have lots of configuration, you might want to use a bind-mount for the configuration files.
Also, for a steady setup, I would recommend using docker-compose instead of raw docker commands.
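As a rough sketch of what that could look like (the image, connection string, and the commented-out config bind-mount are placeholders to adapt to your setup):
# docker-compose.yml (sketch)
version: "3"
services:
  pgbouncer-session:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: "postgres://user:pass@postgres-host/database"
      POOL_MODE: session
    ports:
      - "5432:5432"
    # volumes:
    #   - ./pgbouncer-session.ini:/etc/pgbouncer/pgbouncer.ini:ro
  pgbouncer-transaction:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: "postgres://user:pass@postgres-host/database"
      POOL_MODE: transaction
    ports:
      - "5433:5432"
Bring both up with docker-compose up -d.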

How to run pg_rewind in postgresql docker container

I have the same question. I am running PostgreSQL replication clusters in Docker containers with the official postgres image, and I am now trying to work out an approach for failover.
When running pg_rewind against the previous primary container without stopping the PostgreSQL service, it fails with:
pg_rewind: fatal: target server must be shut down cleanly
But if I run:
docker exec <container-name> pg_ctl stop -D <datadir>
The container is restarted because of its unless-stopped restart policy.
I found the answer myself.
Just stop the existing container and run pg_rewind in a new container using the same image and volume mounts, something like:
docker run -it --rm --name pg_rewind --network=host \
--env="PGDATA=/var/lib/postgresql/data/pgdata" \
--volume=/var/lib/postgresql/data:/var/lib/postgresql/data:rw \
--volume=/var/lib/postgresql:/var/lib/postgresql:rw \
--volume=/var/run/postgresql:/var/run/postgresql:rw \
postgres:12.4 \
pg_rewind --target-pgdata=/var/lib/postgresql/data/pgdata --source-server='host=10.0.0.55 port=5432 dbname=postgres user=replicator password=password'
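Putting it together, the failover sequence looks roughly like this (the container name postgres-primary is a placeholder for your old primary):
# stop the old primary container; with unless-stopped, an explicit docker stop does not trigger a restart
docker stop postgres-primary
# run pg_rewind in the one-shot container shown above, then start the old primary again
docker start postgres-primary
# (configure it as a standby of the new primary when it comes back up)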

What is the best way to install tensorflow and mongodb in docker?

I want to create a Docker container or image that has both TensorFlow and MongoDB installed. I have seen that there are Docker images for each application, but I need them working together: from a MongoDB database I must extract the data to feed a model created in TensorFlow.
I want to know whether a configuration like that is possible. I have tried using an Ubuntu container and installing the applications I need inside it, but I don't know if there is another way to do it.
Thanks.
Interesting that I found this post; I had just worked out one solution for myself. Maybe not the one for you, BTW.
What I did is: docker pull mongo and run it as a daemon:
#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
Here,
the 'd' in '-itd' means running as a daemon (like a service, not interactive);
the --volume option may not be needed.
Then docker pull tensorflow/tensorflow and run it with:
#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
Here,
the -u makes the bash session in the container run with the same UID:GID as the host user;
the --volume maps the host folder /home/user/code to /code in the container;
the -w makes the container's bash start in /code, which is /home/user/code on the host;
the -e HOME= option sets bash's $HOME folder so that you can pip install later.
Now you have a bash prompt in which you can
create a virtual-env folder under /code (which maps to /home/user/code),
activate the venv,
pip install pymongo,
and then connect to the mongodb you ran in Docker (localhost may not work; use the host IP address instead).
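Put together, the steps inside the tensorflow container might look like this (the docker0 gateway address 172.17.0.1 and the venv tooling are assumptions; substitute your host IP):
cd /code
python3 -m venv tf_mongodb/venv        # may require the python3-venv package, or use virtualenv instead
source tf_mongodb/venv/bin/activate
pip install pymongo
# quick connectivity check against the mongodb container published on the host
python -c "import pymongo; print(pymongo.MongoClient('mongodb://172.17.0.1:27017').server_info()['version'])"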

Install pgadmin III for a linked database container in docker

There are two running Docker containers: one contains a web application and the second is a linked Postgres database.
Where should the Pgadmin III tool be installed?
pgAdmin can be deployed in a container using the image at hub.docker.com/r/dpage/pgadmin4/.
E.g. to run a TLS-secured container using a shared config/storage directory in /private/var/lib/pgadmin on the host, and servers pre-loaded from /tmp/servers.json on the host:
docker pull dpage/pgadmin4
docker run -p 443:443 \
-v /private/var/lib/pgadmin:/var/lib/pgadmin \
-v /path/to/certificate.cert:/certs/server.cert \
-v /path/to/certificate.key:/certs/server.key \
-v /tmp/servers.json:/pgadmin4/servers.json \
-e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
-e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
-e 'PGADMIN_ENABLE_TLS=True' \
-d dpage/pgadmin4
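For a simpler local setup without TLS, a minimal run (host port and credentials are placeholders) looks like:
docker run -p 8080:80 \
-e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
-e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
-d dpage/pgadmin4
pgAdmin is then reachable at http://localhost:8080 and can be pointed at the linked database container by its hostname or IP.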

Docker: Use sockets for communication between 2 containers

I have 2 Docker containers: App & Web.
App — a simple container with the PHP application code. It is used only to store and deliver the code to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
Web — the web service container, consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
supervisor \
curl \
wget \
php5-cli \
php5-curl \
php5-fpm \
php5-gd \
php5-memcached \
php5-mysql \
php5-mcrypt \
php5-sqlite \
php5-xdebug \
php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: is it possible to link the Web container and the App container via a socket?
The main reason for this: using the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets can help.
Thank you very much for your help and support!
If both containers run on the same host, it's possible to share a socket between the two, as sockets are plain files.
You can create a local Docker volume and mount that volume in both containers, then configure your program(s) to use that path.
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
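As a sketch of the "configure your program(s)" step: if, say, PHP-FPM in one container created the socket and Nginx in the other consumed it, both sides would point at the same path inside the shared volume (the socket name is an assumption):
# php-fpm pool config (container creating the socket)
#   listen = /var/phpfpm/php-fpm.sock
# nginx vhost (container consuming the socket)
#   fastcgi_pass unix:/var/phpfpm/php-fpm.sock;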
If the socket can be generated on the host, you can mount the file into both containers. This is the method used to let a Docker container control the host's Docker.
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app