I am not able to set a password for Postgres using docker-compose. Postgres starts without a password and with the default user name "postgres"; none of the environment variables below seems to be applied. Below is the db service of my docker-compose.yml file (version 3):
db:
  image: postgres
  container_name: postgres
  environment:
    POSTGRES_USER: user
    POSTGRES_PASSWORD: pass
    POSTGRES_DB: db
  restart: unless-stopped
  volumes:
    - ./postgres-data:/var/lib/postgresql/data
  ports:
    - "5432:5432"
Note: I tried the "- POSTGRES_USER=..." list syntax as well; it didn't work.
Also, I deleted all old containers/volumes.
Any idea?
The problem is most likely the attached volume. The first time the container starts, it initializes the database with the credentials you give it, but if the mounted volume already contains a data directory, that existing data is used instead and your environment variables are ignored.
For more information have a look at https://github.com/docker-library/postgres/issues/203#issuecomment-255200501.
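If that is what happened here, a minimal fix (assuming the stale data really lives in the ./postgres-data bind mount from the question and that its contents are disposable) would be:

docker-compose down
rm -rf ./postgres-data      # wipe the previously initialized data directory
docker-compose up -d        # the image re-initializes with POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB

The environment variables are only read when the data directory is empty, so wiping it (or pointing the volume somewhere new) is what makes them take effect.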
The main reason is the use of ':' instead of '=' in the environment section.
Ideally it should look like this:
db:
  image: postgres
  container_name: postgres
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=pass
    - POSTGRES_DB=db
  restart: unless-stopped
  volumes:
    - ./postgres-data:/var/lib/postgresql/data
  ports:
    - "5432:5432"
Your configuration works fine for me. I suspect you are not using the complete set of correct credentials, which includes the username, password, and database name. If I take your example docker-compose.yaml and run it without modifications, I can connect to the database db like this with username user and password pass:
$ psql -h localhost -U user db
Password for user user:
psql (9.5.7, server 9.6.1)
WARNING: psql major version 9.5, server major version 9.6.
Some psql features might not work.
Type "help" for help.
db=#
Had the same issue. Couldn't solve it for two weeks and read almost everything related to it. After I stopped all PostgreSQL server processes on my local machine, everything worked.
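To find out whether a locally installed PostgreSQL is still holding port 5432, something along these lines should work; the stop command depends on how Postgres was installed, so treat these as examples:

sudo lsof -i :5432                   # shows which process owns the port
sudo systemctl stop postgresql       # Linux with systemd
brew services stop postgresql        # macOS with Homebrew

Once nothing on the host listens on 5432, connections to localhost:5432 reach the published container port instead of the local server.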
Start a postgres instance:

docker run --name postgres-0 -e POSTGRES_PASSWORD=mypassword -p 5433:5432 -d postgres

Now we can list all running containers with this command:

docker ps
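Assuming the container above is running, either of these should confirm that the password took effect (the container's internal port 5432 is published on host port 5433):

psql -h localhost -p 5433 -U postgres         # from the host, via the published port
docker exec -it postgres-0 psql -U postgres   # from inside the container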
I have spent 3-4 hours on this and still have not found a solution.
I can successfully run the Docker container and use psql from the container's bash; however, when I try to call the db from my local machine I keep getting this error message:
error role "postgres" does not exist
I have already tried editing "listen_addresses" in the postgresql.conf file from the container bash
My setup:
I am using a MacBook (Monterey 12.4).
My docker-compose file:
version: '3.4'
services:
  postgres:
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres_db
      - POSTGRES_USER=testUser
      - POSTGRES_PASSWORD=testPW
    volumes:
      - postgres-data:/var/lib/postgresql/db
But this issue occurs if I do it through the standard CLI command as well, i.e.:
docker run -d -p 5432:5432 --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword postgres
I tried to follow this tutorial, but it didn't work:
https://betterprogramming.pub/connect-from-local-machine-to-postgresql-docker-container-f785f00461a7
when I try this command:
psql -h localhost -p 5432 -U postgres -W
it doesn't work:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: role "postgres" does not exist
Also for reference, the user "postgres" does exist in postgres - as a superuser
Replace POSTGRES_USER=testUser with POSTGRES_USER=postgres in the compose configuration. Also use the password defined in POSTGRES_PASSWORD. Delete the old container and create a new one.
Thank you all for your help on this.
It turns out the issue was that I was running Postgres on my local machine as well,
so once I turned that off I was able to connect.
I appreciate your time!
I have a docker image that's not accepting credentials for a user that is defined in the yaml docker-compose file. When I go to the docker console for the container and check users it only lists postgres. Not sure what I am missing - here's the yaml file:
version: '3.8'
services:
  db:
    container_name: drewreport_container
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: mpassword
      POSTGRES_USER: thedrewreport
      POSTGRES_DB: thedrewreportdb
    ports:
      - "5432:5432"
    volumes:
      - thedrewreportdata:/var/lib/postgresql/data/
volumes:
  thedrewreportdata:
Any ideas?
I can't reproduce your problem. Running docker-compose up, I see:
Creating network "docker_default" with the default driver
Creating volume "docker_thedrewreportdata" with default driver
Creating docker_client_1 ...
Creating docker_db_1 ...
Creating docker_client_1 ... done
Creating docker_db_1 ... done
Attaching to docker_db_1, docker_client_1
[...]
db_1 | 2021-07-19 23:03:39.676 UTC [1] LOG: database system is ready to accept connections
If I then connect with psql, I can authenticate using the username and password you've defined in your docker-compose.yml:
# psql -h localhost -U thedrewreport thedrewreportdb
Password for user thedrewreport:
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
thedrewreportdb=# \du
List of roles
Role name | Attributes | Member of
---------------+------------------------------------------------------------+-----------
thedrewreport | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
thedrewreportdb=#
Note that any volumes specified in your docker-compose.yml will persist between a docker-compose down and a docker-compose up, so if you ever brought your stack up with different credentials, those will never be replaced unless you explicitly destroy the volume by running docker-compose down -v.
You can tell that docker-compose is re-using a volume if you don't see a message like this when you run docker-compose up:
Creating volume "docker_thedrewreportdata" with default driver
If it already exists, the DB data you mount to /var/lib/postgresql/data/ takes precedence over the environment variables, which are only used to initialize an empty data directory.
You have 2 options:
Update the existing DB data to add your user / password / database. To do so you can use docker compose exec db bash and then connect with the psql command to make your changes (see the SQL sketch after this list).
Delete or move your existing thedrewreportdata local volume, for instance updating it to ./thedrewreportdata_postgres:/var/lib/postgresql/data/
Once done, you can use docker compose exec db psql --username thedrewreport --dbname thedrewreportdb to double-check that you can connect to the updated DB with your credentials.
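A minimal sketch of option 1, assuming the existing volume was initialized with the default postgres superuser (the role and database names are taken from your compose file):

-- inside the container: docker compose exec db psql -U postgres
CREATE USER thedrewreport WITH PASSWORD 'mpassword';
CREATE DATABASE thedrewreportdb OWNER thedrewreport;
GRANT ALL PRIVILEGES ON DATABASE thedrewreportdb TO thedrewreport;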
I've created a PostgreSQL DB structure locally, and I want it to be the table structure of a PostgreSQL instance that I will run from a Docker container (without manually restoring the dump). I stored the structure in a dump and would normally restore it using psql myDB < myDB_dump.sql.
I got a working PostgreSQL instance running on Docker - with myDB and a user that can access the db.
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    restart: unless-stopped
    ports:
      - ${DB_PORT}:${DB_PORT}
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_DB: ${DB_DATABASE}
Dockerfile:
FROM postgres

# psql client
RUN apt-get update && apt-get install -y postgresql-client

ADD init.sql /docker-entrypoint-initdb.d
RUN ./scripts/init_db.sh
init.sql:
CREATE USER my_user WITH PASSWORD 'my_user';
CREATE DATABASE myDB;
GRANT ALL PRIVILEGES ON DATABASE myDB TO my_user;
Is restoring a dump file the best way to get an existing table structure into this Docker instance, and if so, how? Or is there a better way to copy a DB structure? If so, what would it be?
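One approach worth sketching (my assumption, not something stated above) is to lean on the same /docker-entrypoint-initdb.d mechanism the Dockerfile already uses: the official postgres image runs any *.sql file found there, in sorted order, the first time it initializes an empty data directory, so the dump can simply be mounted in via docker-compose:

services:
  db:
    image: postgres
    restart: unless-stopped
    ports:
      - ${DB_PORT}:${DB_PORT}
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_DB: ${DB_DATABASE}
    volumes:
      - ./myDB_dump.sql:/docker-entrypoint-initdb.d/10-schema.sql   # file name is arbitrary; scripts run in sorted order

Because these scripts only run against an empty data directory, an existing volume has to be removed first (or the dump applied manually with psql) for the structure to show up.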
I have been banging my head against this for a while and can't find what the issue might be. I'm running Docker Desktop on Windows 10. I have one .NET Core 3.1 API that connects to Postgres. Both of these run in containers.
Everything seems to work except the connection to the database. I have looked at my docker-compose.yml a million times and can't come up with any other idea.
Here is my connection string:
"Server=postgres;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;"
Here is docker-compose.yml file:
version: '3'
services:
  identityserver:
    depends_on:
      - "postgres"
    container_name: identityserver
    build:
      context: ./my_project/
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT='Development'
    ports:
      - "5000:80"
  postgres:
    image: "postgres"
    container_name: "postgres"
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345678
      - POSTGRES_DB=IdentityManager
    expose:
      - "5432"
Everything builds, but the connection to the database fails:
Unhandled exception. Npgsql.NpgsqlException (0x80004005): Exception while connecting identityserver
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (99): Cannot assign requested address [::1]:5432
The weirdest thing is that when I run Postgres alone with this same docker-compose.yml configuration and run the application outside of a container with a slightly different connection string:
"Server=127.0.0.1;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;"
I am able to connect to database.
I tried cleaning everything with docker system prune -a, restarting Docker, and restarting the PC, but to no avail. Can anyone help?
Finally, I was able to resolve my own problem, and it wasn't in the docker-compose.yml file at all. Somewhere in the application code, the connection string was changed to look for localhost as the host instead of postgres.
After changing it back to postgres, everything was fine.
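One way to avoid that kind of drift (a sketch based on ASP.NET Core's convention of mapping environment variables containing __ to configuration keys; the key name Default is only a placeholder for whatever your code actually reads) is to set the connection string in docker-compose.yml:

  identityserver:
    environment:
      - ConnectionStrings__Default=Server=postgres;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;

Inside the Compose network, the service name postgres is the correct host; localhost only resolves to the container itself, which is what the "Cannot assign requested address [::1]:5432" error was pointing at.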
Try adding:

links:
  - postgres

Maybe it will help.
I am currently playing around with openmaptiles (https://github.com/openmaptiles/openmaptiles) and I'd like to figure out how to import my own data into the resulting mbtiles. But first I'd like to look at how the postgres database it is using is structured.
I just can't figure out how to connect to the postgres database using the GUI tool I have running locally.
I start postgres using the command provided on the help page:
docker-compose up -d postgres. Is it just not visible outside of the Docker container (I am also very new to Docker)?
And is there a way to make it visible to my local system?
docker-compose up -d postgres refers to this part of the docker-compose.yaml file:
services:
  postgres:
    image: "openmaptiles/postgis:2.9"
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - postgres_conn
    ports:
      - "5432"
    env_file: .env
    ...
As you can see in the ports: section, there is no container-to-host port mapping here. To access this postgres database from your host, try using "5432:5432". (Note that if port 5432 is already in use on the host, you will have to pick an available one.)
For more information on the docker-compose file reference and ports, check the docs.
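A minimal sketch of the change (assuming host port 5432 is free; if not, substitute another host port on the left-hand side):

services:
  postgres:
    image: "openmaptiles/postgis:2.9"
    ports:
      - "5432:5432"   # host:container; other keys (volumes, networks, env_file) unchanged

After running docker-compose up -d postgres again, a GUI tool on the host should be able to connect to localhost:5432 with the credentials from the .env file.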