I use the official Redmine Docker image and run it via docker-compose. My docker-compose.yml is:
version: '3.1'
services:
  redmine:
    image: redmine
    restart: always
    ports:
      - 8080:3000
    volumes:
      - ./storage/docker_redmine-plugins:/usr/src/redmine/plugins
      - ./storage/docker_redmine-themes:/usr/src/redmine/public/themes
      - ./storage/docker_redmine-data:/usr/src/redmine/files
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: example
      REDMINE_SECRET_KEY_BASE: supersecretkey
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: redmine
This configuration runs successfully with the command docker-compose up, and Redmine starts in the production environment. But how do I start Redmine in the development environment? I looked at the available environment variables on the official Redmine image page but did not see one for the Rails server environment. Is there a way to run Redmine in development mode by adding an instruction to docker-compose.yml?
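One approach worth trying (a sketch, assuming the image's entrypoint honors the override): the official image defaults RAILS_ENV to production in its Dockerfile, so setting it explicitly under environment: may switch the server to development mode:

```yaml
# Sketch: run Redmine in development mode by overriding RAILS_ENV
# (assumes the official image's entrypoint respects this variable)
services:
  redmine:
    image: redmine
    ports:
      - 8080:3000
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: example
      RAILS_ENV: development
```

Note that development mode needs the development gems installed in the image, so a rebuilt image or a bind-mounted source tree may also be required.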
Developer context
Your project's tech stack is based on Postgres, RabbitMQ and Redis. Your team regularly faces issues because of differing versions of these tools. During the last retrospective your team decided to resolve this problem with Docker Compose. There is an approved list of environment variables:
DB_URL=postgresql://postgres:password@localhost:5432/nodejs-db
CACHE_URL=redis://localhost:6379
BUS_URL=amqp://user:pass@localhost:5672
Task:
Create a docker compose file that includes the LTS versions of Postgres, RabbitMQ and Redis. All services should run on the same network and be exposed on their default ports. Use environment variables to configure the docker containers.
my code:
version: "3.1"
services:
  postgres:
    image: postgres:13.5
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      DB_URL: postgresql://postgres:password@localhost:5432/nodejs-db
  redis:
    image: redis:6.0.16
    environment:
      CACHE_URL: redis://localhost:6379
    hostname: redis
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: "rabbitmq"
    environment:
      BUS_URL: amqp://user:pass@localhost:5672
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: password
    ports:
      - 5672:5672
      - 15672:15672
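A possible consolidation of the above (a sketch, not authoritative: the DB_URL/CACHE_URL/BUS_URL variables are consumed by your application, not by the Postgres/Redis/RabbitMQ images themselves, and the image tags here are assumptions to be pinned to your team's approved versions) that puts all three services on one explicit network with their default ports exposed:

```yaml
version: "3.1"
services:
  postgres:
    image: postgres:14            # assumed tag; pin to your approved version
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    networks:
      - backend
  redis:
    image: redis:6.2              # assumed tag
    ports:
      - "6379:6379"
    networks:
      - backend
  rabbitmq:
    image: rabbitmq:3-management-alpine
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - backend
networks:
  backend:
```

The application container would then receive DB_URL, CACHE_URL and BUS_URL pointing at the service names (postgres, redis, rabbitmq) rather than localhost.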
I am trying to make a docker compose file that includes SonarQube and a Postgres database, and deploy it to Azure App Service.
Below is the docker-compose file :
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
  sonarqube:
    depends_on:
      - db
    image: sonarqube
    ports:
      - "9000:9000"
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    restart: always
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar
volumes:
  postgresql:
  postgresql_data:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
On my local machine everything works as expected and I can access SonarQube. However, once I try to apply the docker-compose file in Azure App Service I get the following entries in the log:
I tried to check if I can increase vm.max_map_count in App service, but I didn't find a way to do so.
How can I resolve this issue? And is there at least a way to bypass this bootstrap check of vm.max_map_count?
It's not possible to increase vm.max_map_count in Azure App Service. You can bypass this bootstrap check by adding the following line in the environment variables section of the SonarQube service:
SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: 'true'
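Applied to the compose file above, the service section would look like this (only the added line is new; everything else is unchanged):

```yaml
sonarqube:
  image: sonarqube
  environment:
    SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: 'true'   # skip Elasticsearch bootstrap checks
    SONARQUBE_JDBC_URL: jdbc:postgresql://db:5432/sonar
    SONARQUBE_JDBC_USERNAME: sonar
    SONARQUBE_JDBC_PASSWORD: sonar
```

Keep in mind that disabling the bootstrap checks is intended for evaluation setups; the embedded Elasticsearch may still misbehave under load with a low vm.max_map_count.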
New to Docker
Background: I have written a docker-compose.yml which, when run with docker-compose up, builds and runs nicely on my box. Note: my docker-compose.yml downloads the Postgres image.
version: '3'
services:
  api:
    image: conference_api
    container_name: conference_api
    build:
      context: .
    ports:
      - 5000:80
    environment:
      ASPNETCORE_ENVIRONMENT: Production
    depends_on:
      - postgres
  postgres:
    image: postgres:9.6.3
    container_name: conference_db
    environment:
      POSTGRES_DB: conference
      POSTGRES_USER: conf_app
      POSTGRES_PASSWORD: docker
    ports:
      - 5432:5432
    volumes:
      - ./db:/docker-entrypoint-initdb.d
I then publish my docker image to docker hub.
On a fresh machine I use docker pull to pull my image and then I run it.
I get errors basically saying "I can't find the database". The Postgres image was not also downloaded.
My question: When I pull my image, how can I get the Postgres image to also download, as it is a dependency of my image?
Use docker-compose pull --include-deps [SERVICE...].
Per the documentation:
--include-deps Also pull services declared as dependencies
This would require the users of your image to have your docker-compose.yml file.
Another option would be to use Docker-in-Docker, so the docker-compose.yml would live inside your image, where it would be executed. However, this approach appears to be discouraged, even by the developer who made this feature possible.
I would like to be able to deploy my app in a pre-prod environment for integration testing using a Docker volume that will expose an instance of PostgreSQL. I'm using Scala v2.12.8 and Play v2.7.
Looking at the environment settings of the SBT native packager it seems possible to define dockerExposedVolumes in order to attach a DB.
Using a normal Docker compose file I would do something like this:
version: "3"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgress
      - POSTGRES_DB=postgres
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - suruse
volumes:
  pgdata:
networks:
  suruse:
This configuration has been taken from this SO answer.
I tried searching for config examples but I didn't find anything useful so far. Now I'm wondering: how exactly should I define a new Docker volume and then expose it to the Docker image created by SBT?
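On the sbt-native-packager side, declaring a volume is a one-line setting; a minimal build.sbt sketch (the mount-point paths below are illustrative assumptions, not fixed by the plugin):

```scala
// build.sbt sketch, assuming sbt-native-packager with its Docker plugin enabled
enablePlugins(JavaAppPackaging, DockerPlugin)

// hypothetical mount points to expose as VOLUME instructions
dockerExposedVolumes := Seq("/opt/docker/data", "/opt/docker/logs")
```

This only adds VOLUME instructions to the generated Dockerfile; wiring an actual PostgreSQL container next to the app is still done in a compose file.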
THE WORKING SOLUTION
The final version. I've fully tested it and it works, exposing the DB on TCP port 5433.
# https://docs.docker.com/samples/library/postgres/
version: "3"
services:
  app-pgsql:
    image: postgres:9.6
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=yourPasswordHere
      - POSTGRES_DB=yourDatabaseNameHere
      - POSTGRES_INITDB_ARGS="--encoding=UTF8"
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local
Launch the docker compose using sbt dockerComposeUp -useStaticPorts and then check if the containers have been actually exposed using docker ps -a. Also, check the log files using the command provided by dockerComposeUp or dockerComposeInstances.
There is an sbt plugin that helps you achieve this:
sbt-docker-compose
With that you can add your database to a docker compose file and you can run everything within sbt.
This is a Docker standard. Here is an explanation of how to do it for Postgres:
[run_postgresql_docker_compose][2]
The docker-compose.yml from that example:
version: '3'
services:
  mydb:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432/tcp
volumes:
  db-data:
    driver: local
As this is a standard Docker approach, you will find many more examples.
I am playing around with Docker Desktop for Windows (just starting out) and have this simple docker-compose.yml which works great:
version: '2.1'
services:
  db:
    image: mysql:latest
    container_name: wordpresslab_db
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_USER: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_PASSWORD: wordpress
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: wordpresslab_phpmyadmin
    volumes:
      - /sessions
    ports:
      - "8090:80"
    depends_on:
      - db
  wordpress:
    image: wordpress:latest
    container_name: wordpresslab_wordpress
    volumes:
      - ./:/var/www/html
    ports:
      - "8080:80"
    depends_on:
      - db
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
Once I run docker-compose up -d it creates the containers for the database, phpMyAdmin and the WordPress website, and they are accessible and working OK.
My question is: how could I set up "project.dev" instead of "localhost:8080" to access the WordPress site, and "phpmyadmin.dev" instead of "localhost:8090" to access phpMyAdmin? What other tools do I need? Note that I am using Windows 10 as the host.
I think you want to use port mapping as described in the networking doc.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-networking#network-creation
There's also a Docker doc on ports in compose files.
https://docs.docker.com/compose/compose-file/#long-syntax
Please note that there are differences in syntax depending on which version of docker compose you are using. You can check your version by running this command in a command prompt:
docker-compose --version
Let me know if you're still running into trouble!
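For the custom hostnames themselves, port mapping alone won't give you project.dev. A common approach on Windows (sketched here, with the caveat that the hosts file cannot map ports) is to point the names at 127.0.0.1:

```text
# C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
127.0.0.1   project.dev
127.0.0.1   phpmyadmin.dev
```

You would then browse to project.dev:8080 and phpmyadmin.dev:8090. To drop the port numbers you would need to publish the containers on port 80 or put a reverse proxy (e.g. nginx or Traefik) in front of both.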