How to configure the SMTP settings on a docker/redmine container instance? - email

Install & run steps:
docker pull redmine:latest // 3.3.3
docker run -d -p 10030:3000 -v /opt/yunpan01/redmine:/usr/src/redmine --name redmine redmine:latest
Redmine started successfully, but the SMTP settings were not configured.
Please tell me how to configure the SMTP settings.

You can modify configuration.yml in two ways:
1. Use docker cp FILEPATH_IN_YOUR_HOST CONTAINER_ID:PATH_IN_YOUR_CONTAINER. For example, create configuration.yml on the host (vi configuration.yml) and add:
default:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      address: "smtp.yourcompany.com"
      port: 25
Then copy it into the container:
docker cp configuration.yml redmine:/usr/src/redmine/config/configuration.yml
(A fuller smtp_settings example with authentication is sketched after step 2 below.)
2. Volume-map a container directory to a host path:
docker run -d -p 10030:3000 -v /opt/yunpan01/redmine:/usr/src/redmine/files --name redmine redmine:latest
vi /opt/yunpan01/redmine/configuration.yml
and enter the same configuration.yml contents as above:
default:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      address: "smtp.yourcompany.com"
      port: 25
Now you can create a symlink inside the container:
docker exec redmine ln -s \
  /usr/src/redmine/files/configuration.yml \
  /usr/src/redmine/config/configuration.yml
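Whichever way you choose, two follow-ups are worth noting. First, if your mail server requires authentication, the smtp_settings block usually also needs credentials; a minimal sketch, assuming an authenticated relay (the address, port, domain, user name and password here are placeholders, not values from the question):
default:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      address: "smtp.yourcompany.com"
      port: 587
      authentication: :login
      domain: "yourcompany.com"
      user_name: "redmine@yourcompany.com"
      password: "secret"
Second, Redmine reads configuration.yml at application startup, so restart the container for the change to take effect:
docker restart redmine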

It seems that the official Redmine image does not provide this kind of configuration.
You may want to try sameersbn/redmine or bitnami/redmine instead.

Enter the container with:
sudo docker exec -i -t CONTAINER_ID /bin/bash
Copy or rename /usr/src/redmine/config/configuration.yml.example to /usr/src/redmine/config/configuration.yml.
Then map this file as a volume:
redmine:
  image: redmine
  ports:
    - "8089:3000"
  volumes:
    - '/home/ubuntu/docker/redmine/allfiles:/usr/src/redmine/files'
    - '/home/ubuntu/docker/redmine/configuration.yml:/usr/src/redmine/config/configuration.yml'
Set the email configuration in this file, as explained here.

You have to enter your SMTP settings in the configuration.yml file.
See this link: http://www.redmine.org/projects/redmine/wiki/EmailConfiguration

Related

JConsole not connecting to the JMX port of Keycloak

I tried to enable JMX to check on cache statistics. I tried this in a local setup with the following command:
docker run -it --rm --name keycloak \
  --cap-add SYS_ADMIN \
  -p 8080:8080 \
  -p 8787:8787 \
  -p 8999:8999 \
  -e KEYCLOAK_ADMIN="keycloak" \
  -e KEYCLOAK_ADMIN_PASSWORD="keycloak" \
  -e DEBUG="true" \
  -e DEBUG_PORT="*:8787" \
  -e JAVA_OPTS_APPEND="-Xmx1g \
    -Dcom.sun.management.jmxremote.port=8999 -Dcom.sun.management.jmxremote.rmi.port=8999 \
    -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname="$(hostname)" \
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/keycloak.hprof" \
  quay.io/keycloak/keycloak:17.0.0 start-dev \
  --log-level=INFO
JMX is enabled, but I cannot connect from JConsole.
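For context, with -p 8999:8999 published as above, a JConsole connection from the Docker host would typically target that mapped port (the hostname here is just an assumption for a local setup):
jconsole localhost:8999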
You should take a look at the log files. Are there error messages, or can you see whether the arguments reach the startup process?
I had some trouble adding the JMX options on Keycloak 16; the server failed to start afterwards. Then I found this article, which works for me.
If you run the docker image on a remote host, you can copy the jboss-cli-client.jar to your local machine. And after each start of the container you have to add the user again.

How to set Postgres admin password during a Docker container build

We have a web app using a PostgreSQL DB, deployed to Tomcat on a CentOS 7 environment. We are using docker (and docker-compose), running on an Azure virtual machine.
We cannot pre-set the admin user 'postgres' password (e.g. mysecret) during the docker/docker-compose build process.
We have tried the environment: setting in the docker-compose.yml file, as well as ENV in the ./postgres Dockerfile. Neither works.
I had to manually use 'docker exec -it /bin/bash' to run the psql command to set the password. We would like to avoid the manual step.
$ cat docker-compose.yml
version: '0.2'
services:
  app-web:
    build: ./tomcat
    ports:
      - "80:80"
    links:
      - app-db
  app-db:
    build: ./postgres
    environment:
      - "POSTGRES_PASSWORD=password"
      - "PGPASSWORD=password"
    expose:
      - "5432"
$ cat postgres/Dockerfile
FROM centos:7
RUN yum -y install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm
RUN yum -y install postgresql96 postgresql96-server postgresql96-libs postgresql96-contrib postgresql96-devel
RUN yum -y install initscripts
USER postgres
ENV POSTGRES_PASSWORD=mysecret
ENV PGPASSWORD=mysecret
RUN /usr/pgsql-9.6/bin/initdb /var/lib/pgsql/9.6/data
RUN echo "listen_addresses = '*'" >> /var/lib/pgsql/9.6/data/postgresql.conf
RUN echo "PORT = 5432" >> /var/lib/pgsql/9.6/data/postgresql.conf
RUN echo "local all all trust" > /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all 127.0.0.1/32 trust" >> /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all ::1/128 ident" >> /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all 0.0.0.0/0 md5" >> /var/lib/pgsql/9.6/data/pg_hba.conf
EXPOSE 5432
ENTRYPOINT ["/usr/pgsql-9.6/bin/postgres","-D","/var/lib/pgsql/9.6/data","-p","5432"]
Web app deployment fails with a DB authentication error (wrong password); the password 'mysecret' is defined in the web app's JPA persistence.xml. We assume the password was not properly set (a default initdb does not set a password).
Then, after manually changing the password using the above-mentioned docker exec command, everything works.
We would like to set the password at docker build time. Based on the Postgres/Docker documentation and some threads, either environment: in docker-compose or ENV in the Dockerfile should work, but neither works for us.
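One direction worth sketching, purely as an assumption rather than something verified against this setup: initdb can set the superuser password itself at build time via --pwfile, so the plain initdb line in the Dockerfile could be replaced with something like:
# Hypothetical sketch: have initdb set the postgres superuser password at build time.
# The password value and file path are placeholders.
RUN echo "mysecret" > /tmp/pgpass.txt && \
    /usr/pgsql-9.6/bin/initdb --pwfile=/tmp/pgpass.txt /var/lib/pgsql/9.6/data && \
    rm /tmp/pgpass.txt
--pwfile makes initdb read the bootstrap superuser's password from the given file, which avoids relying on POSTGRES_PASSWORD (that variable is only honored by the official postgres image's entrypoint, not by a hand-rolled CentOS image).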

Networking using docker-compose in docker executor in circleci

This is a CircleCI question, I guess.
I am quite happy with CircleCI, but now I've run into a problem and I don't know what I'm doing wrong.
Maybe this is something very easy, but I don't see it.
In short
I can't make containers talk to each other on CircleCI.
Problem
Basically what I wanted to do is start a server container and a client container, and then let them talk to each other.
I created a minimal example here: https://github.com/mRcSchwering/circleci-integration-test
The README.md basically explains the desired outcome.
I have a .circleci/config.yml like this:
version: 2
jobs:
  build:
    docker:
      - image: docker:18.03.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install docker-compose
          command: |
            apk --update add py2-pip
            /usr/bin/pip2 install docker-compose
            docker-compose --version
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            docker-compose ps
      - run:
          name: Let client talk to server
          command: |
            docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
In a docker container, docker-compose is installed, which is then used to start a server and a client (postgres here). In the last step I tell the client to query the server. However, it cannot connect to the server:
#!/bin/sh -eo pipefail
docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
Starting project_server_1 ...
psql: could not connect to server: Connection refused
Is the server running on host "server" (172.18.0.2) and accepting
TCP/IP connections on port 5432?
Exited with code 2
Files
The docker-compose.yml looks like this
version: '2'
services:
  server:
    image: postgres:9.5.12-alpine
    networks:
      - internal
    expose:
      - '5432'
  client:
    build:
      context: .
    networks:
      - internal
    depends_on:
      - server
networks:
  internal:
    driver: bridge
where the client is built from a Dockerfile like this:
FROM alpine:3.7
RUN apk --no-cache add postgresql-client && rm -rf /var/cache/apk/*
Note
If I repeat everything on my Linux machine (also with docker-in-docker) it works.
But I guess some things work completely differently on CircleCI.
I found some people mentioning that networking and bind mounts can be tricky on CircleCI, but I didn't find anything that helped me.
There is this doc, but I thought I was already doing that.
Then there is this project where someone seems to do the same thing on CircleCI successfully.
But I cannot figure out what's different there...
Anyway, I would really appreciate your help. So far I have given up on this.
Best
Marc
OK, in the meantime I (well, actually it was halfer from the CircleCI forum) noticed that docker-compose run client psql -h server -p 5432 -U postgres -c "\l" was run before the server was up and running. A simple sleep 5 after docker-compose up -d fixes the problem.
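For illustration, the adjusted step could look like this (the 5-second delay is arbitrary; a proper readiness check such as pg_isready in a retry loop would be more robust):
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            sleep 5
            docker-compose ps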

Docker: Use sockets for communication between 2 containers

I have 2 Docker containers: App and Web.
App — a simple container with the PHP application code. It is used only to store and deliver the code to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
Web — the web service container, consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
    supervisor \
    curl \
    wget \
    php5-cli \
    php5-curl \
    php5-fpm \
    php5-gd \
    php5-memcached \
    php5-mysql \
    php5-mcrypt \
    php5-sqlite \
    php5-xdebug \
    php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: is it possible to link the Web container and the App container via a socket?
The main reason for this is to use the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets could help.
Thank you very much for your help and support!
If both containers run on the same host, it's possible to share a socket between the two, since sockets are plain files.
You can create a local docker volume and mount that volume in both containers, then configure your program(s) to use that path.
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
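To illustrate what "configure your program(s) to use that path" could look like, here is a rough sketch with assumed paths and file names (not taken from the question's configs): the PHP-FPM pool listens on a socket inside the shared volume, and Nginx passes requests to it.
; php-fpm pool config (e.g. /etc/php5/fpm/pool.d/www.conf)
listen = /var/phpfpm/php-fpm.sock

# nginx vhost location block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/phpfpm/php-fpm.sock;
}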
If the socket can be generated on the host, you can mount the file into both containers. This is the method used to let a docker container control the host's docker.
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app

Docker compose linking appears to not work

I'm using Docker Compose to run an Elixir/Phoenix app in development. The setup is pretty standard, with a postgres container and a web container.
However, I'm having a hard time getting the web container to talk to the database container.
Here is my web container Dockerfile:
FROM ubuntu:14.04
MAINTAINER me@example.com
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y curl
RUN apt-get install -y inotify-tools
RUN apt-get install -y postgresql-client
RUN wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb \
&& dpkg -i erlang-solutions_1.0_all.deb
RUN curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN apt-get update
RUN apt-get install -y esl-erlang
RUN apt-get install -y elixir
RUN mix local.rebar
RUN mix local.hex --force
ADD . src/blog/
WORKDIR src/blog/
RUN mix deps.get
RUN mix deps.compile
Here's my docker-compose.yml:
db:
  image: postgres
web:
  build: .
  command: mix phoenix.server
  volumes:
    - .:/src/blog
  ports:
    - "4000:4000"
  links:
    - db
When I run docker-compose up, things appear to work okay. However, when I try to run the following (to create the database):
$ docker run blogphoenix_web mix ecto.create
I get the following error:
** (Mix) The database for Blog.Repo couldn't be created, reason given: psql: could not translate host name "db" to address: Name or service not known
Then, if I inspect the hosts file of the web container with:
$ docker run blogphoenix_web cat /etc/hosts
... I get this output:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 a86e4f02ea56
Isn't Docker Compose supposed to create a hostname entry for the db container?
Here are some relevant version numbers for my Docker tooling:
$ docker-machine --version
#=> docker-machine version 0.6.0, build e27fb87
$ docker-compose --version
#=> docker-compose version 1.6.0, build unknown
$ docker --version
#=> Docker version 1.10.0, build 590d510
EDIT
Okay, I just noticed something that may help someone else reading this. The command docker run blogphoenix_web cat /etc/hosts starts a new container, whereas docker exec 845f9d69cb1e cat /etc/hosts runs in the already-running container. 845f9d69cb1e is the container ID of the running instance of the blogphoenix_web image.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
845f9d69cb1e blogphoenix_web "mix phoenix.server" About an hour ago Up 2 minutes 0.0.0.0:4000->4000/tcp blogphoenix_web_1
21a6f48dfc3b postgres "/docker-entrypoint.s" About an hour ago Up 2 minutes 5432/tcp blogphoenix_db_1
Running the exec command I get the expected output from the hosts file, showing the appropriate hostname linkage for the db container:
$ docker exec 845f9d69cb1e cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 db_1 21a6f48dfc3b blogphoenix_db_1
172.17.0.2 blogphoenix_db_1 21a6f48dfc3b
172.17.0.2 db 21a6f48dfc3b blogphoenix_db_1
172.17.0.3 845f9d69cb1e
In other words, when I ran docker run blogphoenix_web mix ecto.create, I was executing mix ecto.create in a new container based on the blogphoenix_web image. This new container wasn't started with docker-compose and therefore did not have the appropriate hosts-file linkage to the db container.
You need to run it using docker-compose:
docker-compose run web mix ecto.create
Docker Compose creates linked containers, but the images themselves are not linked. This means that blogphoenix_web is not linked to blogphoenix_db, but when you run
docker-compose up
the newly created containers "blogphoenix_web_1" and "blogphoenix_db_1" will be linked together.
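Alternatively, as the EDIT in the question shows, you can execute the command inside the already-running compose-created container (its name appears in the docker ps output above), for example:
docker exec blogphoenix_web_1 mix ecto.create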