Why is the database not created when running docker-compose up? - postgresql

I am trying to run PostgreSQL with docker-compose, but I can't create my custom database. For example:
Dockerfile
FROM postgres:14.3
#pg_amqp
WORKDIR /code/pg_amqp-master
COPY ./conf/postgres/pg_amqp-master .
RUN apt update && apt install -y make
RUN apt install postgresql-server-dev-14 -y
RUN make && make install
docker-compose.yml
version: '3.8'
services:
  db:
    container_name: postgres
    build:
      context: .
      dockerfile: conf/postgres/Dockerfile
    volumes:
      - ./conf/postgres/scripts:/docker-entrypoint-initdb.d
      - ./conf/postgres/postgresql.conf:/etc/postgresql/postgresql.conf
      - postgres_data:/var/lib/postgresql/data/
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    environment:
      - POSTGRES_DB=${DB_NAME}
      - PGUSER=${POSTGRES_USER}
      - PGPASSWORD=${POSTGRES_PASSWORD}
    ports:
      - '5432:5432'
    env_file:
      - ./.env
/scripts/create_extension.sql
create extension if not exists pg_stat_statements;
create extension if not exists amqp;
.env
DB_NAME=mydb
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypass
When I run docker-compose up -d --build, the build completes, but I end up with only the default database, postgres, and all the 'create extension' statements run in that default postgres database. Where is my mistake?

The Postgres environment variables and initialization scripts are only used if Postgres does not find an existing database in its data directory on startup.
Delete your existing database by removing the postgres_data volume, then start the Postgres container again. It will see an empty volume and create a database for you using your variables and your scripts.
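As a sketch, assuming the volume is declared in this compose file, the quickest way to reset it is (warning: this destroys all existing database data):

```shell
# WARNING: removes the containers AND the named volumes, wiping the database
docker compose down -v
# on the next start the entrypoint sees an empty data directory and
# re-runs initdb plus everything in /docker-entrypoint-initdb.d
docker compose up -d --build
```

The -v flag removes the named volumes declared in the compose file; without it, docker compose down keeps the volume and the init scripts will keep being skipped.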

Related

What is the path for application.properties (or similar file) in docker container?

I am dockerizing a Spring Boot application (with PostgreSQL). I want to overwrite the application.properties inside the Docker container with my own application.properties.
My docker-compose.yml file looks like this:
version: '2'
services:
  API:
    image: 'api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      - PostgreSQL
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  PostgreSQL:
    image: postgres
    volumes:
      - C:/path/to/my/application.properties:/path/of/application.properties/in/container
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
I am doing this to overwrite the application.properties in the container with my own file, so that the data gets stored on localhost.
I tried the path /opt/application.properties, but it didn't work.
You have two solutions:
1) First solution
Create an application.properties that references environment variables:
mycustomproperties1: ${MY_CUSTOM_ENV1}
mycustomproperties2: ${MY_CUSTOM_ENV2}
I advise you to create different application.properties files (application-test, application-prod, etc.).
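For instance, the compose service can supply those variables (a sketch; the variable names and values below are placeholders you would define yourself):

```yaml
services:
  API:
    environment:
      # hypothetical values; Spring resolves ${MY_CUSTOM_ENV1} from the container env
      - MY_CUSTOM_ENV1=jdbc:postgresql://PostgreSQL:5432/postgres
      - MY_CUSTOM_ENV2=some-other-value
```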
2) Another solution
Create docker file:
FROM debian:buster
RUN apt-get update --fix-missing && apt-get dist-upgrade -y
RUN apt install wget -y
RUN apt install apt-transport-https ca-certificates wget dirmngr gnupg software-properties-common -y
RUN wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | apt-key add -
RUN add-apt-repository --yes https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
RUN apt update
RUN apt install adoptopenjdk-8-hotspot -y
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","-Dspring.config.location=file:///config/application.properties","/app.jar"]
or add env variable in docker compose
SPRING_CONFIG_LOCATION=file:///config/application.properties
modify docker-compose:
version: '2'
services:
  API:
    image: 'api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      - PostgreSQL
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
      - SPRING_CONFIG_LOCATION=file:///config/application.properties
    volumes:
      - C:/path/to/my/application.properties:/config/application.properties
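The mounted file is an ordinary Spring properties file. A minimal sketch of what it might contain to point the app at a database on the host machine (the URL and credentials here are assumptions; adjust them to your setup):

```properties
# hypothetical override: host.docker.internal reaches the Docker host from a container
spring.datasource.url=jdbc:postgresql://host.docker.internal:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=password
```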
In case anybody comes across the same problem, here is the solution.
I am trying to use my localhost database instead of the in-memory one (storing it in the container). This is my docker-compose.yml configuration:
version: '2'
services:
  API:
    image: 'api-docker.jar' # (your jar file name)
    volumes:
      - path/to/new/application.properties:/config/env
    ports:
      - "8080:8080"
You need to provide a new application.properties file containing the configuration for storing the data in your local database (it could be a copy of your actual application.properties). This file overwrites the config file in the container at /config/env (the path mentioned in the yml file).

Docker compose: Error: role "hleb" does not exist

Kindly ask you to help with docker and Postgres.
I have a local Postgres database and a project on NestJS.
I have freed up port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that will help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it defaults to postgres.
Your .env file used by Docker Compose does not contain these Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. If the data directory already exists, you will have to delete the volume and let it be re-created for this to work.
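Once the volume has been re-created, you can verify the role exists with psql inside the container (container_name is postgres in the compose file above; the -v flag wipes the old volume):

```shell
# recreate everything from scratch, then list roles in the new cluster
docker compose down -v && docker compose up -d
docker exec -it postgres psql -U hleb -d artwine -c '\du'
```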

Postgres Initialize script not working Docker version 3.4

I am trying to dockerize an application, and in my application I have the following:
docker-compose.yml
version: '3.4'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - database
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
    env_file: .env
    environment:
      RAILS_ENV: development
  database:
    image: postgres:10.12
    volumes:
      - ./init.sql/:/docker-entrypoint-initdb.d/init.sql
      - db_data:/var/lib/postgresql/data
volumes:
  gem_cache:
  db_data:
In my init.sql file
CREATE USER user1 WITH PASSWORD 'password';
ALTER USER user1 WITH SUPERUSER;
I have already run chmod +x init.sql
In my .env file i have the following
DATABASE_NAME=tools_development
DATABASE_USER=user1
DATABASE_PASSWORD=password
DATABASE_HOST=database
And this is my Dockerfile
FROM ruby:2.7.0
ENV BUNDLER_VERSION=2.1.4
RUN apt-get -y update --fix-missing
RUN apt-get install -y bash git build-essential nodejs libxml2-dev openssh-server libssl-dev libreadline-dev zlib1g-dev postgresql-client libcurl4-openssl-dev libxml2-dev libpq-dev tzdata
RUN gem install bundler -v 2.1.4
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle check || bundle install
COPY . ./
ENTRYPOINT ["./entrypoint.sh"]
But each time I run docker-compose run --build and try to run my application, I get the error:
could not translate host name "database" to address: Name or service not known
I have tried everything possible, but I still get the same error.
Does anyone have any idea how to fix this issue?
I know the issue is happening because the postgres initialization scripts are not running. I have seen a lot of options online and tried them all, but I am still facing the same error.
Any help is appreciated.
Thanks
From the postgres Docker image documentation you can see that POSTGRES_USER and POSTGRES_PASSWORD are the environment variables needed to set up the postgres container.
You can add these environment variables to your .env file, so the file will be as follows:
.env
DATABASE_NAME=tools_development
DATABASE_USER=user1
DATABASE_PASSWORD=password
DATABASE_HOST=database
POSTGRES_USER=user1
POSTGRES_PASSWORD=password
POSTGRES_DB=tools_development
These environment variables will be used by the postgres container to initialize the DB and create the user, so you can get rid of the init.sql file.
After that you need to reference the .env file in the database (postgres:10.12) service.
So your docker compose file should be as follow:
docker-compose.yml
...
  database:
    image: postgres:10.12
    volumes:
      - db_data:/var/lib/postgresql/data
    env_file: .env
...

How to dockerize my dotnet core + postgresql app?

I have a dotnet core application created with Angular template that communicates with a postgresql database.
On my local machine, I run the following command on my terminal to run the database container:
docker run -p 5432:5432 --name accman-postgresql -e POSTGRES_PASSWORD=mypass -d -v 'accman-postgresql-volume:/var/lib/postgresql/data' postgres:10.4
And then by pressing F5 in VsCode, I see that my application works great.
To dockerize my application, I added this file to the root of my application.
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
# install nodejs for angular, webpack middleware
RUN apt-get update
RUN apt-get -f install
RUN apt-get install -y wget
RUN wget -qO- https://deb.nodesource.com/setup_11.x | bash -
RUN apt-get install -y build-essential nodejs
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "Web.dll"]
Now I think I have to create a docker-compose file. Would you please help me on creating my docker-compose.yml file?
Thanks,
I figured it out, here is my final version of docker-compose.yml file:
version: '3'
services:
  web:
    container_name: 'accman-web-app'
    image: 'accman-web'
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8090:80'
    depends_on:
      - 'postgres'
    networks:
      - accman-network
  postgres:
    ports:
      - '5432:5432'
    container_name: accman-postgresql
    environment:
      - POSTGRES_PASSWORD=mypass
    volumes:
      - 'accman-postgresql-volume:/var/lib/postgresql/data'
    image: 'postgres:10.4'
    networks:
      - accman-network
volumes:
  accman-postgresql-volume:
networks:
  accman-network:
    driver: bridge
You can use composerize to work out how to add services to your docker-compose file.
Now run the following commands consecutively:
docker-compose build
docker-compose up
And voila!
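One detail worth noting: inside the compose network the app must reach the database via the service name postgres, not localhost. Assuming the app reads a standard ASP.NET Core connection string named Default (an assumption; match the key to your configuration), it can be injected via the environment:

```yaml
services:
  web:
    environment:
      # "__" maps to ":" in ASP.NET Core configuration keys,
      # so this sets ConnectionStrings:Default
      - ConnectionStrings__Default=Host=postgres;Port=5432;Database=postgres;Username=postgres;Password=mypass
```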

Knex Migration with Docker Compose Psql

I have a problem migrating using Knex.js inside my docker-compose container.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs before the database is even created. Is there any way to run npm run db only after the database has been created?
NOTE: if I run the npm commands in the docker terminal after the containers have been built, everything works fine, just FYI.
here is my docker-compose.yml
version: '3.6'
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
In the docker-compose.yml file, use sh to give the command a contained shell context to run in, i.e. sh -c 'npm run db'.
Secondly, add a depends_on entry so the database container is started first.
Your docker-compose file would now be:
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready. In this case that shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to tools there, so you don't have to write your own script from scratch.
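As a sketch of the more solid option: newer Compose versions support gating a service on a healthcheck, so npm run db only starts once Postgres actually accepts connections (the interval/retry values below are arbitrary; pg_isready ships with the postgres image):

```yaml
services:
  server:
    depends_on:
      postgres:
        # the long form of depends_on; waits for the healthcheck to pass
        condition: service_healthy
  postgres:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test -d interapp"]
      interval: 2s
      timeout: 3s
      retries: 15
```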