Docker-Compose cannot find config env file - postgresql

I've created an image from my Go project that uses a config env file. Here's my Dockerfile:
FROM alpine AS base
RUN apk add --no-cache curl wget
FROM golang:1.15 AS go-builder
WORKDIR /go/app
COPY . /go/app
RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /go/app/main ./main.go
FROM base
COPY --from=go-builder /go/app/main /main
CMD ["/main"]
I also created a docker-compose file to connect it with PostgreSQL, like this:
version: "3.7"
services:
postgres:
container_name: postgres
image: postgres
ports:
- 5432:5432
networks:
- go_network
app-golang:
container_name: app1
image: app-go:1.0
ports:
- 8000:8000
depends_on:
- postgres
networks:
- go_network
env_file:
- /env/config
networks:
go_network:
name: go_network
My env file is in config-file format and, as seen above, I store it at /env/config. The problem is that when I run docker-compose up -d, the log says it cannot find /env/config:
ERROR: Couldn't find env file: /env/config
Does env_file work differently when it reads from a config-format file?
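For context, Compose resolves a relative env_file path against the directory that contains docker-compose.yml, while an absolute path like /env/config is looked up from the filesystem root of the machine running Compose. A minimal sketch, assuming the config file is kept at ./env/config next to the compose file:
    env_file:
      - ./env/config   # resolved relative to the docker-compose.yml location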
EDIT 1:
My env file is in config format as shown below, and I'm using the "github.com/kenshaw/envcfg" package to read the config file using envcfg:

Related

Can't reach database server at `postgres`:`5432`

Trying to dockerize Nest and Prisma.
Nest is responding correctly to curl requests, and I can connect to the Postgres server fine with this command:
docker compose exec postgres psql -h localhost -U postgres -d webapp_dev
Everything works until I try to run
npx prisma migrate dev --name init
then I get back
Error: P1001: Can't reach database server at `postgres`:`5432`
Here is my code:
docker-compose.yml
version: "2"
services:
backend:
build: .
ports:
- 3000:3000
- 9229:9229 # debugger port
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
command: yarn start:debug
restart: unless-stopped
depends_on:
- postgres
environment:
DATABASE_URL: postgres://postgres#postgres/webapp_dev
PORT: 8000
postgres:
image: postgres:14-alpine
ports:
- 5432:5432
environment:
POSTGRES_DB: webapp_dev
POSTGRES_HOST_AUTH_METHOD: trust
Dockerfile
FROM node:16
# Create app directory, this is in our container
WORKDIR /usr/src/app
# Install app dependencies
# Need to copy both package and lock to work
COPY package.json yarn.lock ./
RUN yarn install
COPY prisma/schema.prisma ./prisma/
RUN npx prisma generate
# Bundle app source
COPY . .
RUN yarn build
EXPOSE 8080
CMD ["node": "dist/main"]
.env
# .env
DATABASE_URL=postgres://postgres@postgres/webapp_dev
Not sure if this is the only issue, but your DB URL does not contain the DB secret in it:
DATABASE_URL: postgres://postgres:mysecret@postgres/webapp_dev?schema=public
I got the same error. I solved it by adding ?connect_timeout=300 to my DATABASE_URL.
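Combining the two suggestions above, a sketch of what the backend service's environment might look like (mysecret is just the placeholder password from the first answer):
    environment:
      DATABASE_URL: postgres://postgres:mysecret@postgres/webapp_dev?schema=public&connect_timeout=300
      PORT: 8000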

Docker compose: Error: role "hleb" does not exist

I kindly ask you to help with Docker and Postgres.
I have a local Postgres database and a project on NestJS.
I killed the process listening on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo - unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that will help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used for Docker Compose does not contain the Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. You will have to re-create the volume (delete it) to make this work, if the data directory already exists.
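A sketch of that recreate step; docker-compose down -v also removes the named volumes declared in the compose file, so Postgres re-initialises the data directory with the new variables on the next start:
docker-compose down -v
docker-compose up --build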

Dockerfile can't find a file in the same dir (exit code 1)

This is my first time with Docker. I've been working on this problem for two days; it would make me very happy to find a solution.
I'm running this docker-compose.yml file with docker-compose up:
version: '3.3'
services:
  base:
    networks:
      - brain_storm-network
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    build: "./brain_storm"
  data_base:
    image: mongo
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - '27017:27017'
  api:
    build: "./brain_storm/api"
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - 5000:5000
    depends_on:
      - data_base
      - base
    restart: on-failure
the base Dockerfile inside ./brain_storm does the following:
FROM brain_storm-base:latest
RUN mkdir -p /usr/src/brain_storm/brain_storm
ADD . /usr/src/brain_storm/brain_storm
and when running the Dockerfile inside brain_storm/api
FROM brain_storm-base:latest
CMD cd /usr/src/brain_storm \
&& python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
I'm getting this error:
brain_storm_api_1 exited with code 1
api_1 | /usr/local/bin/python: Error while finding module specification for 'brain_storm.api' (ModuleNotFoundError: No module named 'brain_storm')
pwd says I'm in / and not in the current directory when running the base Dockerfile, so that might be the problem. But how do I solve it without hard-coding /home/user/brain_storm in the Dockerfile? I want to keep the location of the brain_storm folder general.
How can I make the Dockerfile see and take files from the current directory (where the Dockerfile is)?
You should probably define a WORKDIR in both your Dockerfiles. The WORKDIR instruction sets the working directory of the build (and of the resulting container), so any RUN, CMD, ADD, COPY, or ENTRYPOINT instruction that follows it is executed in that directory:
base:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
COPY . .
api:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
CMD python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
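After adding WORKDIR, rebuild the images so the change takes effect, for example:
docker-compose build
docker-compose up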

Starting Tryton server with docker-compose file

I am trying to link an external postgres to tryton/tryton from docker hub.
docker-compose.yaml
version: '3.7'
services:
  tryton-postgres:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=tryton
    restart: always
  gnuserver:
    image: tryton/tryton:4.6
    links:
      - tryton-postgres:postgres
    ports:
      - 8000:8000
    depends_on:
      - tryton-postgres
    entrypoint: /entrypoint.sh trytond
When I ssh into the container and run trytond-admin --all -d tryton, it seems to be looking for an SQLite file instead of the connected Postgres database. Are there some env variables I must set? What am I missing in my docker-compose file?
Instead of changing the configuration file, with Docker it is simpler to set environment variables like:
DB_USER=
DB_PASSWORD=
DB_HOSTNAME=tryton-postgres
DB_PORT=5432
You need to edit /etc/tryton/trytond.conf to point at PostgreSQL:
uri = postgresql://USERNAME:PASSWORD@tryton-postgres:5432/
See the docs.
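A sketch of how those variables might be wired into the compose file, assuming the tryton/tryton image picks them up as the first suggestion implies (the values mirror the postgres service defined in the question):
  gnuserver:
    image: tryton/tryton:4.6
    links:
      - tryton-postgres:postgres
    ports:
      - 8000:8000
    depends_on:
      - tryton-postgres
    environment:
      - DB_USER=postgres          # default superuser of the postgres image
      - DB_PASSWORD=password      # matches POSTGRES_PASSWORD above
      - DB_HOSTNAME=tryton-postgres
      - DB_PORT=5432
    entrypoint: /entrypoint.sh trytond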

Docker postgres does not run init file in docker-entrypoint-initdb.d

Based on Docker's Postgres documentation, I can create any *.sql file inside /docker-entrypoint-initdb.d and have it automatically run.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
data:
  restart: always
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"
and my postgres Dockerfile,
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use postgres on my projects and preload the database.
file: docker-compose.yml
db:
  container_name: db_service
  build:
    context: .
    dockerfile: ./Dockerfile.postgres
  ports:
    - "5432:5432"
  volumes:
    - /var/lib/postgresql/data/
This Dockerfile preloads the file named pg_dump.backup (a binary dump) or pg_dump.sql (a plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert a binary dump to plain SQL if one is present
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload the database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
In case you need to retry loading the dump, you can remove the current database container with the command:
docker-compose rm db
Then you can run docker-compose up to retry loading the database.
If your initialisation requirements are just to create the ronda schema, then you could just make use of the POSTGRES_DB environment variable as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container, as this container does not run any service (just the true command). By doing this you are basically telling Docker to run the true command in an infinite loop.
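A sketch of the data service from the question with the restart policy dropped:
data:
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"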
Had the same problem with postgres 11.
Some points that helped me:
run:
docker-compose rm
docker-compose build
docker-compose up
The obvious: don't run compose in detached mode. You want to see the logs.
After adding the docker-compose rm step to the mix, it finally worked.