Goal:
Run Postgres in Docker by pulling the postgres image from Docker Hub (https://hub.docker.com/_/postgres).
Background:
I got this message when I tried to run Postgres with Docker:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
I found an explanation of why at https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/.
Problem:
"Update your docker-compose.yml or corresponding configuration with the POSTGRES_HOST_AUTH_METHOD environment variable to revert back to previous behavior or implement a proper password." (https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/)
I don't understand how this solution applies to my situation.
Where can I find the docker-compose.yml?
Info:
* I'm a newbie with PostgreSQL and Docker.
If you need to run PostgreSQL in Docker, you have to pass an environment variable to the docker run command, like this:
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The error message is telling you exactly that.
Read more at https://hub.docker.com/_/postgres
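Once it is up, you can confirm the instance initialized correctly by opening psql inside the container (same container name as above):
$ docker exec -it some-postgres psql -U postgres
# connects over the container's local socket; type \q to exit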
A docker-compose.yml is just another option; you can simply use docker run as in the first answer. If you want to use docker-compose, the documentation has an example of it:
stack.yml
version: '3.1'

services:

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
and run: docker-compose -f stack.yml up.
Everything is here:
https://hub.docker.com/_/postgres
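For example, with the stack above you can open Adminer at http://localhost:8080 (system PostgreSQL, server db, username postgres, password example), or open psql directly in the db service:
$ docker-compose -f stack.yml exec db psql -U postgres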
Related
I know lots of questions sound like this, and they all have the same answer: delete your volumes to force it to reinitialize.
The problem is, I'm being careful to delete my volumes, but it's consistently spinning up the container incorrectly every time.
My docker-compose.yml
version: "3.1"
services:
db:
environment:
- POSTGRES_DB=mydb
- POSTGRES_PASSWORD=changeme
- POSTGRES_USER=myuser
image: postgres
My process:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose up -v # or docker-compose up --force-recreate
yet it always creates the "postgres" user instead of myuser. The output when it starts up shows that it "will be owned by user 'postgres'" and I can only docker exec as postgres, not my user.
The instructions seem very straightforward. Am I missing something, or is this a bug?
What happens when you use the compose file above?
I can only docker exec as postgres, not myuser
The environment variable POSTGRES_USER controls the database user, not the Linux user. Take a look at the "Arbitrary --user Notes" section in the documentation to learn how to change the Linux user.
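A quick way to see the difference, using the names from the compose file above: the Linux user inside the container is postgres, but the database role myuser exists and owns mydb:
$ docker-compose exec db psql -U myuser -d mydb -c '\du'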
I'm having trouble connecting a Postgres DB to my app using docker-compose.
My understanding of Docker Compose is that I can combine two containers so that they can communicate with each other. Suppose I have an app in the app container that runs the psql command (just a one-liner Python script with os.command("psql")). Since the app container does not have Postgres installed, it won't be able to run psql by itself. However, I thought combining the two containers in docker-compose.yml would let me run psql, but apparently not.
What am I missing here?
I am using two Postgres images because I'm trying to find regression bugs between two DBMS versions.
version: "3"
services:
app:
image: "app:1.0"
depends_on:
- postgres9
- postgres12
ports:
- 8080:80
postgres9:
image: postgres:9.6
environment:
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_USER: postgres
POSTGRES_DB: test_bd
ports:
- '5432:5432'
postgres12:
image: postgres:12
environment:
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_USER: postgres
POSTGRES_DB: test_bd
ports:
- '5435:5435'
Each Docker container has a self-contained filesystem. You can never directly run commands from the host or from other containers' filesystems; anything you want to run needs to be installed in the container (really, in its image's Dockerfile).
If you want to run a tool like psql, it needs to be installed in your image. You don't say what your base image is, but if it's based on Debian or Ubuntu, you need to install the postgresql-client package:
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      postgresql-client
The right approach here is to add a standard Python PostgreSQL client library, like psycopg2, to your project's Python Pipfile, setup.py, and/or requirements.txt, and use that library instead of shelling out to psql. You will also need the PostgreSQL C library header files to install that package; instead of postgresql-client, install the Debian libpq-dev package.
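A rough sketch of the corresponding Dockerfile additions, assuming a Debian/Ubuntu-based Python image (package names differ on Alpine):
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      libpq-dev gcc \
 && pip install psycopg2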
In your case, the two containers, each running a Postgres instance, are separate hosts from the app container's point of view. What you need is to specify the correct host in the psql command. For the postgres12 container it might look like this:
PGPASSWORD="mysecretpassword" psql -h postgres12 -d test_bd -U postgres
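Assuming psql has been installed in the app image as described in the previous answer, both databases should then be reachable from the app container by their service names:
$ docker-compose exec app sh -c 'PGPASSWORD=mysecretpassword psql -h postgres9 -U postgres -d test_bd -c "SELECT 1"'
$ docker-compose exec app sh -c 'PGPASSWORD=mysecretpassword psql -h postgres12 -U postgres -d test_bd -c "SELECT 1"'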
macOS 10.13.6
Docker 19.03.5, build 633a0ea
I created two Docker containers, web and db, using docker-compose.yml. It worked for several months. Recently I decided to rebuild the containers from scratch, so I removed the existing ones and started over:
$ docker-compose -f docker-compose-dev.yml build --no-cache
$ docker-compose -f docker-compose-dev.yml up -d
During the "build" run, building the db container yields this:
DETAIL: The data directory was initialised by PostgreSQL version 9.6,
which is not compatible with this version 11.2.
exited with code 1
The db container does not start, so I cannot check what it has inside.
My containers are defined like this:
version: '3'

services:
  web:
    restart: unless-stopped
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"
    environment:
      DJANGO_SETTINGS_MODULE: '<my_app>.settings.postgres'
      DB_NAME: 'my_db'
      DB_USER: 'my_db_user'
      DB_PASS: 'my_db_user'
      DB_HOST: 'my_db_host'
      PRODUCTION: 'false'
      DEBUG: 'True'
    depends_on:
      - db
    volumes:
      - ./:/usr/src/app/
  db:
    image: postgres:11.2-alpine
    volumes:
      - myapp-db-dev:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=<my_db>
      - POSTGRES_USER=<my_db_user>
      - POSTGRES_PASSWORD=<my_db_password>

volumes:
  myapp-db-dev:
My local postgresql is 11.3 (which should be irrelevant):
$ psql --version
psql (PostgreSQL) 11.3
and my local postgresql data directory was removed completely
$ rm -rf /usr/local/var/postgres
However, it's up-to-date:
$ brew postgresql-upgrade-database
Error: postgresql data already upgraded!
I read Stack Overflow questions 17822974 and 19076980; that advice did not help.
How do I fix this data incompatibility? If possible, I would like to avoid downgrading Postgres. I don't even understand what data it is talking about at that point; all the data is migrated later in a separate step.
It seems that on the first run postgres:9.6 was specified as the image, so the container was initialized and the data was written to the myapp-db-dev named volume. Then someone changed the version and you got the error. A possible solution (sketched in the commands after the list) would be:
1. Temporarily downgrade the version to Postgres 9.6, i.e. specify postgres:9.6.
2. Go into the container and dump the data with the pg_dump utility.
3. Change the version to 11.2 and specify a new volume (using a host volume is good advice here).
4. Restore the data.
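A rough sketch of those steps with docker-compose (the dump file name is illustrative; the variables are the ones already set in the db service's environment):
# with the image temporarily set back to postgres:9.6-alpine and the old myapp-db-dev volume still attached
$ docker-compose exec -T db sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' > dump.sql
# switch the image back to postgres:11.2-alpine, point it at a fresh volume, then recreate and restore
$ docker-compose up -d db
$ docker-compose exec -T db sh -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB"' < dump.sql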
So I'm setting up with Docker Swarm.
I am now comfortable with the docker stack deploy -c docker-compose.yml myapp command, which replaces my former docker-compose up.
But one of my services is my DB, and I need to run pg_restore inside it.
Previously with compose, I would run:
docker-compose run --rm postgres pg_restore --rest-of-command
How can I do the same with stack deploy?
Unfortunately, the container created with compose is not the same as the one from stack deploy: the first one is called myapp_postgres while the second myapp_postgres.1.zamd6kb6cy4p8mtfha0gn50vh.
I guess I could write something like docker exec 035803286af0, but then I lose all the benefits of the config from docker-compose.yml, which in this case is:
postgres:
  env_file:
    - ./.env
  image: postgres:11.0-alpine
  volumes:
    - "..:/app" # to make the dump accessible to the container
    - "/var/run/postgresql:/var/run/postgresql"
So this solution is not very IaC.
So ain't there a docker service run or something?
Thanks
You can follow the Docker image docs (Initialization scripts section)
and create a *.sh script under /docker-entrypoint-initdb.d that runs pg_restore ... when the Postgres container starts as part of the Docker service.
It is not a direct answer to your question, but it may achieve your goal of restoring the dump during Postgres initialization.
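A minimal sketch of such a script, mounted at /docker-entrypoint-initdb.d/restore.sh, assuming POSTGRES_USER and POSTGRES_DB are set in ./.env and that the dump is reachable via the "..:/app" mount (the dump file name is illustrative):
#!/bin/bash
# only runs when the data directory is empty, i.e. on first initialization
set -e
pg_restore --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" --no-owner /app/myapp.dump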
I have a problem with MongoDB. I am provisioning the file config.js into docker-entrypoint-initdb.d in my docker-compose file:
mongo:
  image: mongo:latest
  restart: always
  ports:
    - 27017:27017
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
    MONGO_INITDB_DATABASE: dev
  volumes:
    - ./config.js:/docker-entrypoint-initdb.d/config.js
    - mongodbdata:/data/db
The config.js file looks like this:
db.auth('root', 'example');
db = db.getSiblingDB('dev');
db.approver.insert({"email":"some#email.com,"approverType":"APPROVER"});
db.approver.insert({"email":"someother#email.com","approverType":"ACCOUNTANCY"});
When I run docker-compose up -d for the first time, everything is fine, the two entries are inserted into the database.
But then I want to add a third entry and recreate the container:
db.auth('root', 'example');
db = db.getSiblingDB('dev');
db.approver.insert({"email":"some#email.com,"approverType":"APPROVER"});
db.approver.insert({"email":"someother#email.com","approverType":"ACCOUNTANCY"});
db.approver.insert({"email":"another#email.com","approverType":"ACCOUNTANCY"});
When I run docker-compose up -d --force-recreate --no-deps mongo, nothing happens. The container is recreated, but the third entry is not there.
Running docker exec -it dev_mongo_1 mongo docker-entrypoint-initdb.d/config.js returns:
MongoDB shell version v4.0.10
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d44b8e0a-a32c-4da0-a02b-c3f71d6073dd") }
MongoDB server version: 4.0.10
Error: Authentication failed.
Is there a way to recreate the container so the script is re-run? Or to run a mongo command that will re-run the script in a running container?
In MongoDB's startup script there is a check for whether initialization should be done:
https://github.com/docker-library/mongo/blob/40056ae591c1caca88ffbec2a426e4da07e02d57/3.6/docker-entrypoint.sh#L225
# check for a few known paths (to determine whether we've already initialized and should thus skip our initdb scripts)
if [ -n "$shouldPerformInitdb" ]; then
...
So it is done only once, during DB initialization; since you keep the DB state in the mongodbdata:/data/db volume, it won't initialize again.
To fix that you can run docker-compose down -v, which will delete the data from your DB and let the initialization run once again.
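For example (the -v flag removes the mongodbdata named volume, i.e. all existing data, so the scripts in /docker-entrypoint-initdb.d run again on the next start):
$ docker-compose down -v
$ docker-compose up -d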