I am using Postgres 9.4.1 with Docker 17.06.0 and docker-compose 1.14.0.
While working with it, the connection is often simply lost, and the logs show this:
LOG: invalid length of startup packet
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
ERROR: could not open file "base/1/11943": No such file or directory
FATAL: could not open file "base/12141/11943": No such file or directory
After restarting the container it doesn't get any better:
postgres cannot access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
My only solution is to:
Stop the container and remove it
Remove all data from the volumes
Restart Docker
Bring the container up again.
To be honest, it's quite an annoying process. It always happens when I disconnect from my current network, and sometimes just while I'm working with postgres.
Maybe I'm missing some permission configuration; this is my docker-compose.yml:
postgres:
  image: postgres:9.4.1
  ports:
    - 54320:54320
  environment:
    - POSTGRES_USER=xxx
    - POSTGRES_PASSWORD=xxx
  volumes:
    - /tmp/postgres/staging:/var/lib/postgresql/data
  restart: always
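For reference, a quick way to check whether the mounted host directory still contains a database cluster after one of these failures (paths taken from the compose file above; postgresql.conf and PG_VERSION are created there by initdb):

ls -la /tmp/postgres/staging
ls /tmp/postgres/staging/PG_VERSION /tmp/postgres/staging/postgresql.conf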
I'm using docker for the first time to set up a test database that my team can then use. I'm having some trouble getting my data to show up in DBeaver after running my docker-compose file. The issue I'm facing is that my database does not appear in DBeaver (along with the relevant schemas and tables that I also create/populate in my initialization SQL script).
Here is my docker-compose.yml
version: "3"
services:
  test_database:
    image: postgres:latest
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=test1234
      - POSTGRES_DB=testdb
    container_name: test_database
In this, I specify the Dockerfile I want it to use for building. Here is the Dockerfile:
# syntax = docker/dockerfile:1.3
FROM postgres:latest
ADD test_data.tar .
COPY init_test_db.sql /docker-entrypoint-initdb.d/
Now, when I run docker-compose build and docker-compose up, I can see through the logs that my SQL commands (CREATE, COPY, etc.) do get executed and the rows do get added. But when I connect to this instance through DBeaver, I can't see this at all. In fact, the only database on there is the default Postgres database, even though the logs say I'm connected to test_database.
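For reference, one way to confirm the data actually made it into the container (a quick check, assuming the container name and credentials from the compose file above):

docker exec test_database psql -U dev -d testdb -c '\l'     # list databases
docker exec test_database psql -U dev -d testdb -c '\dt'    # list tables in testdb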
I followed some other solutions and used docker volume prune as well, but that didn't affect anything (I read some solutions about clearing up volumes, and at that point, I had volumes: /tmp:/tmp as well). Any ideas?
Wow, this wasn't an error after all. All I had to do was go on the connection settings on DBeaver and check 'Show all databases' under the Postgres tab. Hope this can help someone :)
Let's say I have the following setup in my docker-compose.yml.
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
    ports:
      - 5432:5432
...
where ./database contains some SQL files that initialize the database. Here's my question... is initdb run every single time the stopped postgres container starts running again (via $ docker-compose up)?
Thus, is it fair to say that every time I restart my postgres container, it builds the entire database from scratch all over again?
My guess is 'yes', as the documentation says:
The default postgres user and database are created in the entrypoint with initdb.
The answer is no. When you stop your container it is not deleted, only stopped, and you can start it again later, the same way your computer does not vanish from your desk when you shut it down :)
You can even restart it when it is running, same as you would do with your computer.
However when you remove/delete the container with
docker rm -f containername
or
docker-compose rm
then it is truly deleted, equivalent of making your computer vanish from your desk.
But even then you can still persist your data with volume mounts. For example, the ./database directory in your compose file will not be deleted from your host machine even when you delete the containers using it. It is the equivalent of using an external USB drive with your computer: when you make your computer vanish from your desk by deleting it, you still have the USB drive with the data that was on it.
So you can persist your database files with the same technique in a volume mount like this:
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
      - ./postgres-data/data:/var/lib/postgresql/data
    ports:
      - 5432:5432
...
This way, when you delete your container(s) and run "docker-compose up" again with the same compose file, postgres will not run its init scripts, because the /var/lib/postgresql/data directory is already populated.
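To illustrate the idea, here is a simplified sketch of the decision the official image's entrypoint makes at startup (not the actual script - it keys off whether the data directory already contains a cluster):

# simplified sketch, not the real docker-entrypoint.sh
if [ -s "$PGDATA/PG_VERSION" ]; then
    echo "existing cluster found - skipping initdb and /docker-entrypoint-initdb.d"
else
    initdb
    # ...then run the *.sql and *.sh files from /docker-entrypoint-initdb.d
fi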
However, my computer analogy is only valid in this context; please do not think of containers as mini computers or mini virtual machines, because they are not! But that's another discussion.
I have a docker-compose file that looks like this
version: "3.7"
services:
app:
stdin_open: true
tty: true
build:
context: .
dockerfile: app.Dockerfile
volumes:
- ${HOST_SAVE_DIRC}:${CONTAINER_SAVE_DIRC}
depends_on:
- postgres
postgres:
image: 'postgres'
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_HOST_AUTH_METHOD=trust
restart: always
expose:
- "5432"
where variables like POSTGRES_USER are entries from an env file. app.Dockerfile looks like
FROM python:3.8.3-slim-buster
COPY src /src/
COPY init.sql .
COPY .env .
COPY run.sh run.sh
COPY requirements.txt .
RUN ls -a
RUN pip install --no-cache-dir -r requirements.txt
The containers are created, then the user is logged into the app container with the main function of the program being called - this is when the database calls are made.
From the app container I am attempting to connect to the postgres container via psycopg2. However when I attempt to do so, I receive the following error:
psycopg2.OperationalError: could not connect to server: No route to host
Is the server running on host "postgres" (172.22.0.2) and accepting
TCP/IP connections on port 5432?
using a psycopg2 call that looks like
with psy.connect(host='postgres', port=5432, user='postgres', password='postgres') as conn:
...
The entries of this psycopg2 call match the env file given to the docker-compose file.
My understanding is that Postgres uses port 5432 by default, and that when docker-compose creates the two containers it also creates a Docker network for them named DIR_default, where DIR is the name of the directory the docker-compose file lives in; each container can then be reached using the service name listed in the docker-compose file ('postgres' and 'app' in this case).
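One quick way to check that assumption from the host (a sketch; substitute the real network name reported by docker network ls):

docker network ls
docker network inspect DIR_default    # lists the attached containers and their IP addresses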
Among various tries:
I've checked and the database isn't going down between the container being created and the user being exec'd in.
I've tried various little changes like changing the container names, postgres login info, etc.
I've tried linking the postgres container name explicitly with link: "postgres:postgres".
Other solutions suggested here
Any help would be greatly appreciated! I see no reason why something as simple as this should be happening, but here I am.
Edit:
Pinging the Postgres container from the app container appears to work when running docker exec app ping postgres_container_name. Is this a sign that the Docker network is set up correctly and the issue is something on my end?
Edit 2:
Tried clearing all images and containers, then restarting the Docker daemon and afterwards my PC. No change in either case.
For reference, the ping command looked like
docker exec python-app ping name_given_to_postgres_container
returning various statements which looked like
64 bytes from name_given_to_postgres_container.project_name_default (172.18.0.3): icmp_seq=1 ttl=64 time=0.090 ms
which, unless I am mistaken, signals a successful ping.
The top level .env file provided to docker-compose
HOST_SAVE_DIRC=~/python_projects/project_directory/directory_in_project
CONTAINER_SAVE_DIRC=/pdfs
POSTGRES_DB=project_name # same as project_directory
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_PORT=5432
Here is the requirements.txt file for the Python app as well
certifi==2020.4.5.1
chardet==3.0.4
idna==2.9
psycopg2-binary==2.8.5
read-env==1.1.0
requests==2.23.0
urllib3==1.25.9
Exec-ing into the Postgres container with docker exec -it container_id bash and running psql -U postgres appears to be successful - even with restart: always removed. I can also see the database named in the docker-compose file is also created. I feel confident in saying this container isn't dying spontaneously.
However, hitting the 5432 port on the Postgres container with netcat via nc name_given_to_postgres_container 5432-5433 returns an error similar to the one returned by psycopg2
arxivist_postgres_1 [172.22.0.3] 5433 (?) : No route to host
arxivist_postgres_1 [172.22.0.3] 5432 (postgresql) : No route to host
The same error is also returned with curl. So my guess is that the issue isn't with the Postgres container directly, psycopg2, or the host name - but something with the port?
Edit 3:
As a last attempt to fix this project, the full project this post is referring to is posted at this link. If anyone would like to download the repo and try building the docker containers themselves via ./start.sh - that might be just what is needed to find a solution!
I thought I had Docker set up on my machine, which runs Fedora 32. However, as I came to realize from this article, setting up Docker on Fedora 32 requires some extra steps I was not previously aware of.
Specifically for this issue, the command listed in the article to whitelist Docker on the local network's firewall was:
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-masquerade
So I believe the root cause of my issue was simply that my app container was being blocked from reaching the postgres container by the firewall. Making the above change finally made the program work!
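For completeness, a --permanent rule only becomes active after the firewall is reloaded (or the machine is rebooted); these are the standard firewalld commands for that, using the zone from the command above:

sudo firewall-cmd --reload
sudo firewall-cmd --zone=FedoraWorkstation --list-all    # should now show masquerade: yes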
After reading
How to persist data in a dockerized postgres database using volumes
I know that mounting a volume to a local host folder is a good way to prevent data loss in case something fatal happens to the docker process.
I'm using Windows as the host with a Linux-based docker image. Windows has problems mounting the postgres data (from the Linux-based docker image) to a Windows host directory - Mount Postgres data to Windows host directory
Later, I discovered another technique:
postgres:
  build:
    context: ./postgres
    dockerfile: Dockerfile
  restart: always
  ports:
    - "5432:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data: {}
I can use a non-external volume (I have no idea what that is yet).
After implementing this, I tried performing
docker-compose down
docker-compose up
The data is still there.
May I know,
Is this still a good way to persist data in dockerized postgres?
Where exactly is the data being stored? Is it some hidden directory on the host machine?
Is this still a good way to persist data in dockerized postgres?
No, but also yes... Postgres is a DB, so the data should be externalized to avoid data loss in case of a failure of the container, etc.
A good practice would be to have two databases, one in a container and one on the host, or both in containers (then with data inside the containers), synchronized in master/slave mode. That gives you high availability, for container maintenance for example, but it is high availability only, not a backup! :) (If such a setup doesn't exist yet, you have to create it, of course :-))
Where exactly is the data being stored? Is it some hidden directory on the host machine?
It is stored in the named volume postgres_data that you declare at the top level of your compose file. Unlike a bind mount, a named volume is managed by Docker itself, so it does not show up next to your compose file: on Linux it normally lives under Docker's data root (/var/lib/docker/volumes/<project>_postgres_data/_data), and on Windows or macOS it lives inside the Docker Desktop VM. Using a full host path (a bind mount) instead is also a fine practice if you want to see and choose the directory yourself.
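A quick way to see this for yourself (the project prefix below is a placeholder; use whatever name docker volume ls reports):

docker volume ls
docker volume inspect myproject_postgres_data    # the "Mountpoint" field shows where the files actually live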
I have this docker-compose:
version: "2"
services:
api:
build: .
ports:
- "3007:3007"
links:
- mongo
mongo:
image: mongo
volumes:
- /data/mongodb/db:/data/db
ports:
- "27017:27017"
For the volume /data/mongodb/db:/data/db, is the first part (/data/mongodb/db) where the data is stored inside the image, and the second part (/data/db) where it's stored locally?
It works in production (Ubuntu), but when I run it on my dev machine (Mac) I get:
ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied
Even if I run it with sudo. I've added the /data directory in the "File Sharing" section of the Docker app on the Mac.
Is the idea to use the same docker-compose on both production and development? How do I solve this issue?
Actually it's the other way around (HOST:CONTAINER), /data/mongodb/db is on your host machine and /data/db is in the container.
You have added /data to the shared folders of your dev machine, but you haven't created /data/mongodb/db; that's why you get a permission denied error. Docker doesn't have the rights to create those folders itself.
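One way around it is to create the folder yourself before starting the containers, for example (a rough sketch; adjust the ownership to whichever user runs Docker on your machine):

sudo mkdir -p /data/mongodb/db
sudo chown -R $(whoami) /data/mongodb/db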
I get the impression you need to learn a little bit more about the fundamentals of Docker to fully understand what you are doing. There are a lot of potential pitfalls running Docker in production, and my recommendation is to learn the basics really well so you know how to handle them.
Here is what the documentation says about volumes:
[...] specify a path on the host machine (HOST:CONTAINER)
So you have it the wrong way around. The first part is the path on the host, e.g. your local machine, and the second is where the volume is mounted within the container.
Regarding your last question, have a look at this article: Using Compose in production.
Since Docker Compose file syntax version 3.2, you can use the long syntax of the volumes property to specify the type of volume. This allows you to create a "bind" volume, which effectively links a folder on your host to a folder in your container.
Here is an example:
version: "3.2"
services:
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - type: bind
        source: /data
        target: /data/db
    ports:
      - "42421:27017"
source is the folder on your host and target is the folder in your container.
More information available here: https://docs.docker.com/compose/compose-file/#long-syntax