Can't connect with docker-compose to Postgres database

I'm trying to build a docker-compose file that will spin up my EF Core web api project, connecting to my Postgres database.
I'm having a hard time getting the EF project to connect to the database.
This is what I currently have for my docker-compose.yml:
version: '3.8'
services:
  web:
    container_name: 'mybackendcontainer'
    image: 'myuser/mybackend:0.0.6'
    build:
      context: .
      dockerfile: backend.dockerfile
    ports:
      - 8080:80
    depends_on:
      - postgres
    networks:
      - mybackend-network
  postgres:
    container_name: 'postgres'
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=MySuperSecurePassword!
      - POSTGRES_DB=MyDatabase
    networks:
      - mybackend-network
    expose:
      - 5432
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - mybackend-network
    volumes:
      - ./pgadmin-data/:/var/lib/pgadmin/
networks:
  mybackend-network:
    driver: bridge
And my web project Dockerfile looks like this:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the CSPROJ file and restore any dependencies (via NUGET)
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image - do not include the whole SDK to save image space
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyBackend.dll"]
And my connection string looks like this:
User ID =bootcampdb;Password=MySuperSecurePassword!;Server=postgres;Port=5432;Database=MyDatabase; Integrated Security=true;Pooling=true;
Currently I have two problems:
I'm getting Npgsql.PostgresException (0x80004005): 57P03: the database system is starting up when I run docker-compose up. I tried adding a healthcheck to my postgres service, but that did not work. When I go to the Docker Desktop app and start my backend again, that message goes away and I get my second problem...
Secondly, after the DB has started it says: FATAL: password authentication failed for user "username". It looks like it's not creating my user for the database. I even changed to not use .env files and put the values directly in my docker-compose file, but it's still not working. I've tried docker-compose down -v to ensure my volumes get deleted.
Sorry these might be silly questions, I'm still new to containerization and trying to get this to work.
Any help will be appreciated!

Problem 1: Having depends_on only means that docker-compose will wait until your postgres container is started before it starts the web container. The postgres container needs some time to get ready to accept connections and if you attempt to connect before it's ready, you get the error you're seeing. You need to code your backend in a way that it'll wait until Postgres is ready by retrying the connection with a delay.
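If you'd rather solve it at the Compose level instead, a healthcheck on the postgres service combined with the long form of depends_on can make Compose hold back the web container until the database is ready. A minimal sketch, assuming a Compose version that supports condition: service_healthy (pg_isready ships with the postgres image; user and database names taken from your file):
services:
  postgres:
    # ... rest of the service as above ...
    healthcheck:
      # exits 0 once Postgres accepts connections for this user and database
      test: ["CMD-SHELL", "pg_isready -U username -d MyDatabase"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    depends_on:
      postgres:
        condition: service_healthy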
Problem 2: Postgres only creates the user and database when its data directory is empty, i.e. on first initialization. You probably have an existing database in ./db-data/ on the host. Try deleting ./db-data/ and Postgres should create the user and database using the environment variables you've set.
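One thing to watch out for: docker-compose down -v only removes named and anonymous volumes, and ./db-data/ is a bind mount, so the host directory survives and has to be deleted by hand. A small sketch of the alternative, assuming you're fine with a named volume instead of a host directory, so that down -v really does wipe the data and re-trigger initialization:
services:
  postgres:
    # ... rest of the service as above ...
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: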

Related

Postgres running via docker not persisting data after initialization script

I'm using docker for the first time to set up a test database that my team can then use. I'm having some trouble getting my data on DBeaver after running my docker-compose file. The issue I'm facing is that my database does not show up in DBeaver (along with relevant Schemas and Tables that I also create/populate in my initialization sql script).
Here is my docker-compose.yml
version: "3"
services:
test_database:
image: postgres:latest
build:
context: ./
dockerfile: Dockerfile
restart: always
ports:
- 5432:5432
environment:
- POSTGRES_USER=dev
- POSTGRES_PASSWORD=test1234
- POSTGRES_DB=testdb
container_name: test_database
In this, I specify the Dockerfile I want it to use for building. Here is the Dockerfile:
# syntax = docker/dockerfile:1.3
FROM postgres:latest
ADD test_data.tar .
COPY init_test_db.sql /docker-entrypoint-initdb.d/
Now, when I run docker-compose build and docker-compose up, I can see through the logs that my SQL commands (CREATE, COPY, etc.) do get executed and the rows do get added. But when I connect to this instance through DBeaver, I can't see this at all. In fact, the only database on there is the default postgres database, even though the logs say I'm connected to test_database.
I followed some other solutions and used docker volume prune as well, but that didn't affect anything (I read some solutions about clearing up volumes, and at that point, I had volumes: /tmp:/tmp as well). Any ideas?
Wow, this wasn't an error after all. All I had to do was go on the connection settings on DBeaver and check 'Show all databases' under the Postgres tab. Hope this can help someone :)

Running a MongoDB in Docker using Compose

I'm trying to run a database in docker and a python script with it to store MQTT messages. This gave me the idea to use Docker Compose since it sounded logical that both were somewhat connected. The issue I'm having is that the Docker Containers do indeed run, but they do not store anything in the database.
When I run my script locally it does store messages so my hunch is that the Compose File is not correct.
Is this the correct way to compose a python script which stores messages in a DB together with the database itself (with a .js file for the credentials)? Any feedback would be appreciated!
version: '3'
services:
  storing_script:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on: [mongo]
  mongo:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: xx
      MONGO_INITDB_ROOT_PASSWORD: xx
      MONGO_INITDB_DATABASE: motionDB
    volumes:
      - ${PWD}/mongo-data:/data/db
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    ports:
      - 27018:27018
    restart: unless-stopped
The Dockerfile I'm using to build:
# set base image (host OS)
FROM python:3.8-slim
# set the working directory in the container
WORKDIR /code
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
CMD [ "python", "./main.py"]
I think this may be due to user permissions.
What I did for my docker-compose Docker deployment is to also mount the passwd file after creating a mongodb user:
volumes:
  - /etc/passwd:/etc/passwd:ro
This worked for me as the most straightforward solution.

Docker compose: unable to connect to Postgres [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 years ago.
I have Go as my API and PostgreSQL as my database.
I can run my backend in a Docker container when executed in the development environment. However, when I run my Dockerfile with docker-compose, the backend does not connect to Postgres.
Dockerfile
FROM golang:alpine
RUN mkdir /backend
ADD . /backend/
WORKDIR /backend
COPY go.mod .
COPY go.sum .
COPY .env .
RUN go mod download
RUN go build -o main .
EXPOSE 3002
CMD ["./main"]
docker-compose.yml
version: '3.3'
services:
  postgres:
    image: "postgres"
    ports:
      - "5432:5432"
    env_file:
      - .env
    volumes:
      - ./postgres/:/var/lib/postgresql/data/
    restart: always
    networks:
      - "backend.network"
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - backend
      - postgres
    ports:
      - "8080:8080"
    links:
      - 'postgres'
      - 'backend'
    networks:
      - "backend.network"
  backend:
    build: "."
    ports:
      - "3002:3002"
    depends_on:
      - postgres
    links:
      - postgres
    restart: "always"
    networks:
      - "backend.network"
networks:
  backend.network:
.env
DB_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=pass
POSTGRES_NAME=db
Go connection
DB, err = gorm.Open("postgres", fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable", config.Config("DB_HOST"), port, config.Config("POSTGRES_USER"), config.Config("POSTGRES_PASSWORD"), config.Config("POSTGRES_NAME")))
if err != nil {
panic("failed to connect database")
}
Error
failed to connect databasedial tcp 127.0.0.1:5432: connect: connection refused
exit status 1
I really don't know what's wrong with my dockerfile.
As @blami pointed out, DB_HOST should be "postgres". Also, you are passing the .env file to the postgres service but not to the backend one (so it may be using default values). If it keeps failing, please comment with the error you get.
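For illustration, a sketch of those two changes, assuming you keep a single .env file for both services: set the host to the service name in .env,
DB_HOST=postgres
and pass the same file to the backend service in docker-compose.yml:
backend:
  build: "."
  env_file:
    - .env
  # ... ports, depends_on, etc. as before ...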
Finally, I would recommend using a builder image to compile your server and then copying the binary into a smaller Alpine-based image (if you want to go a step further you can build the final image from scratch), keeping the container as lightweight as possible, just like this:
FROM golang:1.15-alpine as builder
COPY . /backend
WORKDIR /backend
# If you are building the final image below from scratch, specify the CGO_ENABLED=0 env var
RUN go build -o main -ldflags="-s -w" .

FROM alpine:3.12.1
COPY --from=builder /backend/main /usr/bin/
EXPOSE 3002
ENTRYPOINT ["/usr/bin/main"]
Dockerfile notes based on best practices article:
Try to specify image versions whenever possible, as the latest one may introduce new bugs to your app. However, it's not strictly required.
Prefer COPY to ADD when the resource you are trying to copy is not a URL.
If the purpose of the container is purely running a server, it's recommended to use ENTRYPOINT instead of CMD, which will ignore any parameters passed when running the container (if you want to use the "exec" command then CMD is preferred).
Go binary note: -ldflags="-s -w" is often used to reduce the binary size by stripping the debugging information (more info here).
As @blami said:
Your backend is trying to connect to "localhost", but each of your containers has its own "localhost".
You can use your machine's host IP, or your database container's IP, which you can find using docker inspect <container_name>.

How to make sure docker-compose will not remove my volume with postgres data

I am running a simple django webapp with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data in there with a custom django command that I wrote for my app. Everything is running fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and also in several posts here at SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand it correctly, you can disable by using the --renew-anon-volumes flag).
However in my case, the volume is not persisted. Or maybe it is, but my data is gone.
By doing docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, the CreatedAt value has been changed. This makes me think it's a different volume with the same name, and my data is already gone, but I don't know how to confirm that.
This SO answer suggests to mount the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.
It turns out that the Dockerfile of my app was using an entrypoint that executed python manage.py flush, which clears all data in the database. As this ran every time the app container started, the database was emptied on every restart. It had nothing to do with docker-compose.

Using docker-compose to create tables in postgresql database

I am using docker-compose to deploy a multicontainer python Flask web application. I'm having difficulty understanding how to create tables in the postgresql database during the build so I don't have to add them manually with psql.
My docker-compose.yml file is:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/flask-app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
I don't want to have to enter psql in order to type in:
CREATE DATABASE my_database;
CREATE USER this_user WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE "my_database" to this_user;
\i create_tables.sql
I would appreciate guidance on how to create the tables.
It didn't work for me with the COPY approach in the Dockerfile, but I managed to run my init.sql file by adding the following to docker-compose.yml:
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
init.sql was in the same directory as my docker-compose.yml.
I picked the solution from this gist. Check this article for more information.
I don't want to have to enter psql in order to type in
You can simply use the container's built-in init mechanism:
COPY init.sql /docker-entrypoint-initdb.d/10-init.sql
This makes sure that your SQL is executed after the DB server is properly booted up.
Take a look at their entrypoint script. It does some preparation to start psql correctly and looks into the /docker-entrypoint-initdb.d/ directory for files ending in .sh, .sql and .sql.gz.
The 10- prefix in the filename is there because files are processed in ASCII order. You can name your other init files 20-create-tables.sql and 30-seed-tables.sql.gz, for example, and be sure they are processed in the order you need.
Also note that the invoking command does not specify the database. Keep that in mind if you are, say, migrating to docker-compose and your existing .sql files don't specify a DB either.
Your files will be processed at the container's first start instead of at the build stage, though. Since Docker Compose stops containers and then resumes them, there's almost no difference, but if it's crucial for you to init the DB at build stage, I suggest still using the built-in init method by calling /docker-entrypoint.sh from your Dockerfile and then cleaning up the /docker-entrypoint-initdb.d/ directory.
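The same ordering convention applies if you mount the files with docker-compose instead of baking them into the image. A small sketch using the example file names from above:
postgres:
  image: postgres:latest
  volumes:
    - ./10-init.sql:/docker-entrypoint-initdb.d/10-init.sql
    - ./20-create-tables.sql:/docker-entrypoint-initdb.d/20-create-tables.sql
    - ./30-seed-tables.sql.gz:/docker-entrypoint-initdb.d/30-seed-tables.sql.gz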
I would create the tables as part of the build process. Create a new Dockerfile in a new directory ./database/
FROM postgres:latest
COPY . /fixtures
WORKDIR /fixtures
RUN /fixtures/setup.sh
./database/setup.sh would look something like this:
#!/bin/bash
set -e
/etc/init.d/postgresql start
psql -f create_fixtures.sql
/etc/init.d/postgresql stop
Put your create user, create database, create table sql (and any other fixture data) into a create_fixtures.sql file in the ./database/ directory.
and finally your postgres service will change to use build:
postgres:
  build: ./database/
  ...
Note: Sometimes you'll need a sleep 5 (or, even better, a script that polls and waits for postgresql to start) after the /etc/init.d/postgresql start line. In my experience either the init script or the psql client handles this for you, but I know that's not the case with mysql, so I thought I'd call it out.