How to attach a PostgreSQL volume to a Docker image generated with SBT native packager?

I would like to be able to deploy my app in a pre-prod environment for integration testing using a Docker volume that will expose an instance of PostgreSQL. I'm using Scala v2.12.8 and Play v2.7.
Looking at the environment settings of the SBT native packager it seems possible to define dockerExposedVolumes in order to attach a DB.
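For reference, that setting goes in build.sbt; a minimal sketch (assuming sbt-native-packager with the DockerPlugin enabled) could look like this:
// build.sbt -- a minimal sketch, assuming sbt-native-packager's DockerPlugin
enablePlugins(JavaAppPackaging, DockerPlugin)

// Adds a VOLUME instruction to the generated Dockerfile. Note that this only
// declares a mount point in the app image; it does not start PostgreSQL.
dockerExposedVolumes := Seq("/var/lib/postgresql/data")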
Using a normal Docker Compose file I would do something like this:
version: "3"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgress
      - POSTGRES_DB=postgres
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - suruse
networks:
  suruse: # top-level definition for the network referenced above
volumes:
  pgdata:
This configuration was taken from this SO answer.
I tried searching for config examples but haven't found anything useful so far. So how exactly should I define a new Docker volume and then expose it to the Docker image created by SBT?
THE WORKING SOLUTION
This is the final version. I've fully tested it and it works, exposing the DB on TCP port 5433.
# https://docs.docker.com/samples/library/postgres/
version: "3"
services:
  app-pgsql:
    image: postgres:9.6
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=yourPasswordHere
      - POSTGRES_DB=yourDatabaseNameHere
      - POSTGRES_INITDB_ARGS="--encoding=UTF8"
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local
Launch Docker Compose using sbt dockerComposeUp -useStaticPorts and then check whether the containers have actually been exposed using docker ps -a. Also, check the log files using the command provided by dockerComposeUp or dockerComposeInstances.
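With the DB published on host port 5433, the Play application can then point its JDBC URL there; a sketch of conf/application.conf (assuming Play's jdbc module and the PostgreSQL driver are on the classpath, and reusing the placeholder credentials from the compose file above):
# conf/application.conf -- a sketch; adjust database name and credentials
db.default.driver = org.postgresql.Driver
db.default.url = "jdbc:postgresql://localhost:5433/yourDatabaseNameHere"
db.default.username = postgres
db.default.password = "yourPasswordHere"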

There is an sbt plugin that helps you achieve this:
sbt-docker-compose
With it you can add your database to a docker-compose file and run everything from within sbt.
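The wiring is roughly this (a sketch; the plugin version and the compose file path are assumptions, check the plugin's README for current values):
// project/plugins.sbt
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.34")

// build.sbt -- enable the plugin so dockerComposeUp/dockerComposeStop work
enablePlugins(DockerComposePlugin)

// Optional: point the plugin at a non-default compose file location
composeFile := "docker/docker-compose.yml"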
This is standard Docker usage. Here is an explanation of how to do it for Postgres: run_postgresql_docker_compose
The docker-compose.yml from that example:
version: '3'
services:
  mydb:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432/tcp
volumes:
  db-data:
    driver: local
As this is the standard Docker way of doing things, you will find plenty more examples.

Related

Docker: cannot run multiple services with docker-compose

I set up Docker Compose for my project with 2 services: spring-boot and postgresql. I created the Dockerfile and docker-compose.yml as below:
Dockerfile :
FROM openjdk:8-jdk-alpine
MAINTAINER linhan.com
COPY target/LinhAn-0.0.1-SNAPSHOT.jar linhan-server-1.0.0.jar
ENTRYPOINT ["java","-jar","/linhan-server-1.0.0.jar"]
docker-compose.yml:
version: '2'
services:
  spring_boot:
    image: 'linhan'
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      - SPRING_DATASOURCE_URL=jdbc:jdbc:postgresql://localhost:5432/test_db
      - SPRING_DATASOURCE_USERNAME=user
      - SPRING_DATASOURCE_PASSWORD=123456
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  postgres:
    image: 'postgres:13.1-alpine'
    container_name: db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
Then, when I typed docker-compose up in the terminal, only postgres ran; spring boot did not start.
I searched Google for a solution but found nothing so far. Please help me, thanks a lot!
I think you need to change the SPRING_DATASOURCE_URL to reference your service name instead of localhost. The service name resolves automatically to your service, since all services are part of the same default network in docker-compose. Note also that your URL has a duplicated jdbc: prefix, which should be removed:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/test_db
Also, for clarity I would suggest you add the port to your docker-compose postgres service, so it is clear which port is being used, even if it is the default:
postgres:
  image: 'postgres:13.1-alpine'
  container_name: db
  ports:
    - "5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=123456
Also, another suggestion would be to use a healthcheck to see whether your database service has actually become available, instead of a plain depends_on: the short form marks the dependency as fulfilled as soon as the container is running, regardless of the availability of the database (see the sketch below).
Either that, or you can add application logic that retries the database connection on failure.
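A sketch of the healthcheck approach (pg_isready is one common probe; note that the condition form of depends_on requires compose file format 2.1 or later):
services:
  postgres:
    image: 'postgres:13.1-alpine'
    healthcheck:
      # Succeeds once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U user -d test_db"]
      interval: 5s
      timeout: 3s
      retries: 5
  spring_boot:
    depends_on:
      postgres:
        condition: service_healthy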

Docker-compose postgresql integration

I'm new to Docker and am trying to make a composed image consisting of several services: nginx and a postgresql database. I'm following the tutorial here: http://www.patricksoftwareblog.com/how-to-use-docker-and-docker-compose-to-create-a-flask-application/
I have been successful up to adding postgresql, where I'm having difficulties and questions.
My docker-compose.yml:
version: '2'
services:
  web:
    restart: always
    build: ./home/admin/
    expose:
      - "8000"
  nginx:
    restart: always
    build: ./etc/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./var/lib/postgresql
    volumes_from:
      - data
    ports:
      - "5432:5432"
I have included his Docker generator script under /var/lib/postgresql but keep facing ERROR: Dockerfile parse error line 1: unknown instruction: IMPORT when I run docker-compose build.
If I leave in the data section and remove the postgres section of my docker-compose.yml file, my containers seemingly run fine, but I'm unsure whether postgresql is properly running at all. I'm able to GET using curl, but I'm still unsure how to confirm Postgres specifics to verify a proper environment, and would appreciate examples on this topic in particular.
I was also wondering whether running my docker-compose containers and then simply running a separate postgresql container could also work, provided the correct ports are used.
Thank you!
Check the content of your docker-compose.yml for:
- YAML format (see for instance codebeautify.org/yaml-validator)
- EOL or encoding issues
- multi-line instructions
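As for confirming that Postgres itself is up, one quick check (assuming the service is named postgres and the image's default superuser) is:
# List databases from inside the running container
docker-compose exec postgres psql -U postgres -c '\l'
# Or probe readiness without opening a psql session
docker-compose exec postgres pg_isready -U postgres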

How to access a postgres docker container from another docker container without an IP address

How can I access a postgres docker container from another docker container without an IP address?
I want to store data in postgres from myweb. In the jar, the host is given as localhost:5432/db..
Here is my compose file:
version: "3"
services:
myweb:
build: ./myweb
container_name: app
ports:
- "8080:8080"
- "9090:9090"
networks:
- front-tier
- back-tier
depends_on:
- "postgresdb"
postgresdb:
build: ./mydb
image: ppk:postgres9.5
volumes:
- dbdata:/var/lib/postgresql
ports:
- "5432:5432"
networks:
- back-tier
volumes:
dbdata: {}
networks:
front-tier:
back-tier:
Instead of localhost:5432/db.. use the postgresdb:5432/db.. connection string.
By default, a container has the same hostname as its service name.
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a postgres server. The most important part is the health check, which delays the start of the myweb container until postgres is ready to accept connections.
Note that this can be executed directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
This is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are on the same network, named back-tier, and the Docker daemon resolves the service name like a DNS name to enable communication between the two containers. I hope this solution helps you.

Using custom hostnames for docker local development containers

I am playing around with Docker Desktop for Windows (just starting out) and have this simple docker-compose.yml which works great:
version: '2.1'
services:
  db:
    image: mysql:latest
    container_name: wordpresslab_db
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_USER: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_PASSWORD: wordpress
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: wordpresslab_phpmyadmin
    volumes:
      - /sessions
    ports:
      - "8090:80"
    depends_on:
      - db
  wordpress:
    image: wordpress:latest
    container_name: wordpresslab_wordpress
    volumes:
      - ./:/var/www/html
    ports:
      - "8080:80"
    depends_on:
      - db
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
Once I run docker-compose up -d it creates the containers for the database, phpmyadmin, and the wordpress website, and they are accessible and working OK.
My question is: how could I set up "project.dev" instead of "localhost:8080" to access the wordpress site, and "phpmyadmin.dev" instead of "localhost:8090" to access phpmyadmin? What other tools do I need? Note that I am using Windows 10 as the host.
I think you want to use port mapping as described in the networking doc.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-networking#network-creation
There's also a Docker doc on ports in compose files.
https://docs.docker.com/compose/compose-file/#long-syntax
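For reference, the long syntax (compose file format 3.2+) spells out each field of the mapping, roughly:
ports:
  - target: 80        # port inside the container
    published: 8080   # port exposed on the host
    protocol: tcp
    mode: host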
Please note that there are differences in syntax depending on which version of docker compose you are using. You can check your version by running this command in a command prompt:
docker-compose --version
Let me know if you're still running into trouble!

Two docker-compose .yml files in the same network with COMPOSE_PROJECT_NAME

I am trying to use my own network name for my docker-compose files (server.yml and test.yml), as test.yml only gets started from time to time but needs access to some services in server.yml. I can make it work with docker-compose -p nameofproject up, but not with COMPOSE_PROJECT_NAME.
server.yml
version: '2'
networks:
  mynetwork:
    driver: bridge
services:
  app1:
    networks:
      - mynetwork
    environment:
      POSTGRES_PASSWORD: somepassword
      COMPOSE_PROJECT_NAME: serverstack
  app2:
    networks:
      - mynetwork
    environment:
      COMPOSE_PROJECT_NAME: serverstack
    depends_on:
      - app1
My expectation is that when the containers are starting I should see
Creating serverstackmynetwork_app_1
Creating serverstackmynetwork_app_2
and that the network should be named (per docker network ls)
serverstack_mynetwork
just like when I do the following, which actually works:
docker-compose -p serverstack up
And then I can connect just by using docker-compose up with the second file (which works just fine when using the -p option on the server.yml):
testing.yml
version: '2'
networks:
  testapp_network:
    external:
      name: serverstack_mynetwork
services:
  testapp:
    networks:
      - testapp_network
But when using it without -p serverstack on the server.yml, I see directory-based names instead:
Creating directoryofapp1_app1_1
Creating directoryofapp2_app2_1
so COMPOSE_PROJECT_NAME is being ignored, and I also cannot connect to the server service through serverstack_mynetwork.
I did add COMPOSE_PROJECT_NAME: serverstack after building the image, but I would expect it to work anyhow. What am I missing?
I solved this by creating a .env file next to the yml files, containing
COMPOSE_PROJECT_NAME=myprojectname
Setting COMPOSE_PROJECT_NAME under a service's environment: block has no effect on the project name: that only sets a variable inside the container, whereas the project name must be visible to the docker-compose CLI itself (via the shell environment, a .env file, or the -p flag).
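Alternatively, the variable can be set in the shell that invokes compose, for example:
# Equivalent to the .env entry, for a single invocation
COMPOSE_PROJECT_NAME=myprojectname docker-compose -f server.yml up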