Windows 10 Pro. I have a local Postgres installed and working fine.
With it running, in the VS Code terminal, docker-compose up works fine with the following:
version: '3.8'
services:
  postgres:
    image: postgres:10.4.2
    ports:
      - '5432:5432'
    volumes:
      - ./sql:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass1
      POSTGRES_DB: db
But the psql shell always complains that password authentication failed for the user.
After stopping the postgres service from Windows Services and running docker-compose up, psql shell authentication and queries work, but the VS Code terminal keeps complaining about something else:
FATAL: password authentication failed for user "postgres"
DETAIL: User "postgres" has no password assigned.
Connection matched pg_hba.conf line 95: "host all all all md5"
How do I stop the above error when the docker container's instance is running? Also, is it possible to co-run both the local and the docker instance?
Hope you are enjoying your containers journey!
I tried to execute your docker-compose as it was but could not fetch the postgres:10.4.2 image:
❯ docker-compose up
[+] Running 0/1
⠿ postgres Error 2.1s
Error response from daemon: manifest for postgres:10.4.2 not found: manifest unknown: manifest unknown
So I decided to use postgres:14.2 instead. Since I don't have your sql script, I'll comment out the volumes section.
Here is what my docker-compose looks like:
version: '3.8'
services:
  postgres:
    image: postgres:14.2
    ports:
      - '5432:5432'
    # volumes:
    #   - ./sql:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass1
      POSTGRES_DB: db
So, when I execute the compose I get this:
❯ docker-compose up -d
[+] Running 1/1
⠿ Container postgre-local-and-dockercompose-71984505-postgres-1 Started
❯ docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                    NAMES
4b90573f6108   postgres:14.2   "docker-entrypoint.s…"   18 seconds ago   Up 15 seconds   0.0.0.0:5432->5432/tcp   postgre-local-and-dockercompose-71984505-postgres-1
When I connect to the container with:
❯ docker exec -it postgre-local-and-dockercompose-71984505-postgres-1 bash
root@4b90573f6108:/#
and execute this command to connect to your created DB with the "pass1" password:
root@4b90573f6108:/# psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=$POSTGRES_DB
Password:
psql (14.2 (Debian 14.2-1.pgdg110+1))
Type "help" for help.
db=#
everything is fine.
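For completeness, the same connection should also work from the host side through the published port, with the credentials from the compose file above. A rough sketch, assuming psql is on your Windows PATH and the local Postgres service is stopped so the container owns port 5432:
psql -h localhost -p 5432 -U user -d db
# prompts for the password, which is pass1 in the compose file above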
So I advise you to use the same postgres:14.2 image I tried (patched with the latest security fixes) and do the same test.
If you want me to test exactly what you are doing, just send your sql scripts.
To answer your second question: yes, it is possible to co-run both local and docker postgres instances.
You just have to map the postgresql port of your container to another host port, like this:
version: '3.8'
services:
  postgres:
    image: postgres:14.2
    ports:
      - '5433:5432'
    # volumes:
    #   - ./sql:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass1
      POSTGRES_DB: db
Since there is no port conflict (your local db is running on 5432 and your docker db on 5433), everything will work fine (I will use DBeaver to try to connect):
PERFECT!
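If you prefer the command line over DBeaver, a rough sanity check of both instances could look like this. The docker credentials come from the compose file above; the local superuser name is only an assumption, use whatever your Windows install was set up with:
psql -h localhost -p 5432 -U postgres      # local Windows instance (assumed user)
psql -h localhost -p 5433 -U user -d db    # docker instance, password pass1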
Hope I answered your questions.
bguess.
Related
I am trying to connect to a Postgres instance running in a Docker container. In the docker-compose file, the postgres service looks like this:
flask-api-postgres:
  container_name: flask-api-postgres
  image: postgres:13.4-alpine
  env_file:
    - dev.env
  ports:
    - "5433:5433"
  networks:
    flask-network:
With docker inspect I get that the container has the address: 172.19.0.2.
The API works fine, but when trying to access the database from pgAdmin with the config shown in the image (user and password are correctly set), I get the error shown.
pgAdmin config
I do not know how to access the postgres instance from pgAdmin.
One approach is to access the postgres docker container from the pgAdmin hosted on your host machine using 127.0.0.1 instead of 172.19.0.2.
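A quick way to sanity-check that route before configuring pgAdmin is a psql call from the host. This is only a sketch: it assumes the compose file maps the container's 5432 to host port 5433 (i.e. "5433:5432"), and youruser/yourdb stand in for whatever is defined in dev.env:
psql -h 127.0.0.1 -p 5433 -U youruser -d yourdb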
Another way is to create another container for pgAdmin. In this case, you can access your PostgreSQL using the container IP (for example 172.19.0.2). Add this to your docker-compose file:
pgadmin:
  image: dpage/pgadmin4
  depends_on:
    - flask-api-postgres
  ports:
    - "5050:80"
  environment:
    PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
    PGADMIN_DEFAULT_PASSWORD: admin
  restart: unless-stopped
  networks:
    flask-network:
Make sure both are under the same network.
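To verify that, the docker network commands can help. A small sketch; the exact network name depends on your compose project name:
docker network ls
docker network inspect <project>_flask-network   # both containers should show up under "Containers"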
Please check the port you are using. The default is 5432.
See experiment:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c4d92a623a6 postgres:latest "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 5432/tcp, 0.0.0.0:5433->5433/tcp cannot-access-postgres-instance-running-in-docker-container-from-pgadmin-database-1
> docker exec -it 0c4d92a623a6 sh
# psql "host=127.0.0.1 port=5433"
psql: error: connection to server at "127.0.0.1", port 5433 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
# psql "host=127.0.0.1 port=5432"
psql: error: connection to server at "127.0.0.1", port 5432 failed: FATAL: role "root" does not exist
#
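On the host side, docker port shows which container ports are actually published; a quick check with the container name from the compose above:
docker port flask-api-postgres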
I have created a docker container for a postgres service, but when I start it and try to connect to the database I get errors as if I hadn't defined a user and a database for the Postgres instance. I already tried changing the docker-compose and looking for the problem, but I didn't find it.
Here are the attachments:
Dockerfile:
FROM wyveo/nginx-php-fpm:latest
RUN chmod -R 775 /usr/share/nginx/
RUN export pwd=pwd
docker-compose.yml:
version: '3'
services:
  laravel-app_prm:
    build: .
    ports:
      - "8099:80"
    volumes:
      - ${pwd}/.docker/nginx/:/usr/share/nginx
  postgres_prm:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=db_usr
      - POSTGRES_PASSWORD=postgres_password
      - POSTGRES_DB=db_prm
    ports:
      - "5432:5440"
    volumes:
      - ${pwd}/.docker/dbdata:/var/lib/postgresql/data/
When I try to connect to the database directly through the container's bash, I get an error that the user and database, both entered exactly as defined in docker-compose.yml, do not exist.
sudo docker exec -it <postgres_container_id> bash
psql -h localhost -U db_usr
... and so on...
And to set up the connection in pgAdmin I got the container IP using:
sudo docker container inspect <postgres_container_id>
and getting the value from the IPAddress attribute.
I created a docker-compose.yml whose content you can find below. I navigate to the folder where the file resides and run this command:
docker-compose up -d
This was shown:
Starting postgres ... done
Then I run this command:
docker-compose ps
Result:
  Name               Command               State    Ports
-----------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Exit 1
Now I wanted to run this command:
docker exec -it postgres psql -h localhost -p 54320 -U robert
This is what I get:
Error response from daemon: Container ae1565a84bcf0b3662b47d4f277efd2830273554b6bcf4437129e33b31c88b35 is not running
Is my container not running? Please help.
docker-compose.yml:
version: "3"
services:
# Create a service named db.
db:
# Use the Docker Image postgres. This will pull the newest release.
image: "postgres"
# Give the container the name my_postgres. You can changes to something else.
container_name: "postgres"
# Setup the username, password, and database name. You can changes these values.
environment:
- POSTGRES_USER=robert
- POSTGRES_PASSWORD=robert
- POSTGRES_DB=mydb
# Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fix your needs.
ports:
- "54320:5432"
# Set a volume some that database is not lost after shutting down the container.
# I used the name postgres-data but you can changed it to something else.
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
Can you attempt to exec
docker run -it postgres psql -h localhost -p 54320 -U robert
?
$ docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Since your container has the status Exit, you can't use docker exec.
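To see why it exited in the first place, the container logs are usually enough. With the container_name from your compose file, or the service name via compose:
docker logs postgres
docker-compose logs db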
Can you use this docker-compose file?
version: "3"
volumes:
postgres_app: ~
services:
# Create a service named db.
postgres:
image: "postgres"
environment:
POSTGRES_USER: robert
POSTGRES_PASSWORD: robert
POSTGRES_DB: "mydb"
volumes:
- "postgres_app:/var/lib/postgresql/data"
ports:
- "54320:5432"
restart: always
And this command: docker-compose exec postgres psql -U robert -d mydb
I hope this will help!
I executed this file on my computer.
I have set up a node.js back end to connect to a redis cache and a psql database.
The app I have created is running, but I would like to do some database admin and have attempted to log in using pgAdmin - however, my details were rejected.
I thought it might be a pgAdmin thing, so I attempted to use the login URI in PowerShell, but again it was rejected.
I checked that the psql service is running on the exposed port (in case I messed up the docker-compose config) and it is... Not sure where to go from here.
My docker-compose config for the database is:
# PostgreSQL
postgres:
  container_name: postgres
  build: ./postgres
  environment:
    POSTGRES_USER: admin
    POSTGRES_PASSWORD: password
    POSTGRES_URL: postgres://admin:password@localhost:5432/myapp
    POSTGRES_DB: myapp
    POSTGRES_HOST: postgres
  ports:
    - "5432:5432"
I should note that the database is running - I can log in to my front end and access data, etc...
My login attempt:
psql postgres://admin:password@localhost:5432/myapp
And the response:
psql: FATAL: password authentication failed for user "admin"
I think your docker-compose is not formatted well (if it's not a copy-paste issue), as the environment variables are not placed properly.
# PostgreSQL
postgres:
  image: postgres
  container_name: postgres
  environment:
    POSTGRES_USER: admin
    POSTGRES_PASSWORD: password
    POSTGRES_URL: postgres://admin:password@localhost:5432/myapp
    POSTGRES_DB: myapp
    POSTGRES_HOST: postgres
  ports:
    - "5432:5432"
Or you can try
version: '3.7'
services:
  postgresdb:
    container_name: postgres
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: appdb
      POSTGRES_PASSWORD: 123123
    image: bitnami/postgresql:latest
    ports:
      - "5432:5432"
Or better, post your Dockerfile, since I see you are building your own Docker image; it is better to use the official Postgres image like the one I posted above.
I would also suggest debugging the DB in its container first and verifying connectivity on the container's localhost before testing with dependent containers (like connecting from Node.js); otherwise it is easy to get lost in the actual problem.
Check if your ENV variables are set properly.
docker exec postgres bash -c "printenv "
or
docker exec postgres bash -c "printenv | grep POSTGRES_"
or
docker exec -it postgres bash -c "psql -U admin myapp"
I'm trying to create a database and connect to it within my container network. I don't want to have to ssh into a box to create users/databases etc, as this is not a scalable or easily distributable process.
This is what I have so far:
# docker-compose.yml
db:
  image: postgres:9.4
  volumes:
    - ./db/init.sql:/docker-entrypoint-initdb/10-init.sql
  environment:
    - PGDATA=/tmp
    - PGDATABASE=web
    - PGUSER=docker
    - PGPASSWORD=password
This is my init.sql file:
CREATE DATABASE web;
CREATE USER docker WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE web TO docker;
When I start up the container and try to connect to it, I get this error:
db_1 | FATAL: role "docker" does not exist
db_1 | done
db_1 | server started
db_1 | FATAL: database "web" does not exist
db_1 | psql: FATAL: database "web" does not exist
The first time this happened, I tried to create a role like this:
CREATE ROLE docker with SUPERUSER PASSWORD password;
GRANT web TO docker;
But it did not have any effect. To make matters even more confusing, when I use node-postgres to connect to the db, I get this error:
Error: connect ECONNREFUSED
But how can the connection be refused if the db service isn't even up?
In a nutshell, these are the questions I'm trying to solve:
How can I create a database using only the files in my project (i.e. no manual commands)?
How do I create a user/role using only the files in my project?
How do I connect to this database?
Thank you in advance.
How can I create a database using only the files in my project (i.e. no manual commands)?
The minimal docker-compose.yml config for your defined user and database is:
postgres:
  image: postgres:9.4
  environment:
    - POSTGRES_DB=web
    - POSTGRES_USER=myuser
How do I create a user/role using only the files in my project?
To execute scripts on database initialization, take a look at the official docs for initdb.
To get you started with a quick and dirty solution, create a new file, e.g. init_conf.sh, in the same directory as your docker-compose.yml:
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE ROLE docker with SUPERUSER PASSWORD 'password';
EOSQL
And add the volumes directive to your docker-compose.yml.
volumes:
  - .:/docker-entrypoint-initdb.d
Recreate your container, because otherwise you won't trigger a new database initialization. That means docker stop and docker rm the old one first before executing docker-compose up again. STDOUT now gives you some information about the newly introduced script.
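A minimal sketch of that recreation step, using the compose-level equivalents of docker stop / docker rm and assuming the service is named postgres as in the snippet above:
docker-compose stop postgres
docker-compose rm -f postgres
docker-compose up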
How do I connect to this database?
To connect to your database with docker exec via the terminal:
docker exec -ti folder_postgres_1 psql -U myuser -d web
A docker-compose.yml in one of my production environments looks like the following:
services:
  postgres:
    logging: &logging
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
    build: ./docker/postgres  # path to custom Dockerfile
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - postgres_backup:/backups
    env_file: .env
    restart: always
  # ... other services like web, celery, redis, etc.
Dockerfile:
FROM postgres:latest
# ...
COPY *.sh /docker-entrypoint-initdb.d/
# ...
The environment variables you are using are wrong. Try this:
version: '3.3'
services:
  db:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=web
    volumes:
      - db_data:/var/lib/postgresql/data
    # optional port
    ports: ["5555:5432"]
volumes:
  db_data:
Then from any other docker-compose service you can access the DB at db:5432, and from your host machine you can access postgres on localhost:5555 if you also add the ports entry.
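As a rough illustration with the values from that compose file (the password, password, is prompted interactively):
psql -h localhost -p 5555 -U docker -d web   # from the host, via the published port
psql -h db -p 5432 -U docker -d web          # from another container on the same compose network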