I have the volume
- ./var/volume/postgres/db:/var/lib/postgresql/data
for a postgres container:
image: postgres:10
and I want to point it at an external folder on another disk instead:
- /media/ubuntuuser/Data/data/db:/var/lib/postgresql/data
but a path outside the working directory doesn't work for me.
Can I fix it somehow?
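For reference, a minimal sketch of how the two mounts could be written in docker-compose (both paths are the ones from the question; the absolute one only works if the disk is mounted and the directory is readable by the Docker daemon, and only one of the two should be active at a time):

services:
  postgres:
    image: postgres:10
    volumes:
      # relative path, resolved against the directory containing the compose file
      # - ./var/volume/postgres/db:/var/lib/postgresql/data
      # absolute path on another disk; the directory must exist and be accessible to Docker
      - /media/ubuntuuser/Data/data/db:/var/lib/postgresql/data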
Why do the contents of my EFS-backed volume vary depending on the container reading it?
I'm seeing divergent, EFS-related behavior depending on whether I run two processes in a single container or each in its own container.
I'm using the Docker Compose ECS integration to launch the containers on Fargate.
The two processes are Database and Verifier.
Verifier directly inspects the on-disk storage of Database.
For this reason they share a volume, and the natural docker-compose.yml looks like this (simplifying):
services:
  database:
    image: database
    volumes:
      - database-state:/database-state
  verifier:
    image: verifier
    volumes:
      - database-state:/database-state
    depends_on:
      - database
volumes:
  database-state: {}
However, if I launch in this configuration the volume database-state is often in an inconsistent state when read by Verifier, causing it to error.
OTOH, if I combine the services so both Database and Verifier run in the same container there are no consistency issues:
services:
  database-and-verifier:
    image: database-and-verifier
    volumes:
      - database-state:/database-state
volumes:
  database-state: {}
Note that in both cases the database state is stored in database-state. This issue doesn't appear if I run locally, so it is specific to Fargate / EFS.
Any ideas what's going on and how to fix it?
This feels to me like a write-caching issue, but I doubt EFS would have such a basic problem.
It also feels like it could be a permissions issue, where key files are somehow hidden from Verifier.
Thanks!
Well.
I have a docker-compose.yaml with a Postgres image (it's a simple sample).
And I have a NodeJS script that runs a raw SQL query against Postgres:
COPY (SELECT * FROM mytable) TO '/var/lib/postgresql/data/mytable.csv'
What happens?
mytable.csv is saved inside the Postgres container.
What do I need?
To save mytable.csv to the HOST MACHINE (or to another container from the docker-compose).
Context: I have big tables (1m+ rows), and the files need to be generated and saved by the Postgres server itself. But this process will be started via a NodeJS script with a "COPY" query from another container / the host machine.
Do you know how to do this?
my docker-compose.yml:
version: "3.6"
services:
postgres:
image: postgres:10.4
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=1234
volumes:
- postgres-storage:/var/lib/postgresql/data
ports:
- "5432:5432"
UPDATE:
I made a diagram in Miro for my process. The main problem is the THIRD step: I can't return the .csv file to NodeJS or save it into the NodeJS container. I can do 2 things:
Return rows and build the file in NodeJS (but the NodeJS server will do that slowly)
Save the .csv file in the Postgres container. But I need the .csv file inside the NodeJS container
[Diagram: schema with the two containers that I need]
Well, thanks to the person who linked this question to the one about COPY TO STDOUT (sorry, I don't remember the question ID).
So, the problem is solved by using COPY TO STDOUT and the small npm module pg-copy-streams.
The code:
// assumes the 'pg' Client and the pg-copy-streams module, which exports the COPY TO helper as `to`
const { Client } = require('pg');
const { to: copyTo } = require('pg-copy-streams');
const fs = require('fs');
const client = new Client(config);
await client.connect();
const output = fs.createWriteStream('./output.csv');
const result = client.query(copyTo('COPY (select * from my_table) TO STDOUT WITH (FORMAT CSV, HEADER)'));
result.pipe(output);
output.on('finish', () => client.end()); // close the connection once the file has been written
So, Postgres will send the CSV as a stream to the NodeJS script on the host, and on the NodeJS side we only need to write this stream to a file, without converting it to CSV ourselves.
Thanks!
In PostgreSQL, which directories do we need to persist in general so that I can use the same data again even if I rebuild?
Like:
I know the main directory:
/var/lib/postgres or /var/lib/postgres/data (small confusion about which one)
and any others, like the logs etc.
You can define the PGDATA environment variable in your docker container to specify where postgres will save its database files.
From the documentation of the official postgres Docker image:
PGDATA:
This optional variable can be used to define another location - like a
subdirectory - for the database files. The default is
/var/lib/postgresql/data, but if the data volume you're using is a
filesystem mountpoint (like with GCE persistent disks), Postgres
initdb recommends a subdirectory (for example
/var/lib/postgresql/data/pgdata ) be created to contain the data.
Additionally, from the postgres documentation, transaction log files are also written to PGDATA:
By default the transaction log is stored in a subdirectory of the
main Postgres data folder (PGDATA).
So by default the postgres image will write database files to /var/lib/postgresql/data.
To answer your question it should be sufficient to bind mount a directory to /var/lib/postgresql/data inside of your postgres container.
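As a rough sketch of that bind mount (the ./pgdata host path, the postgres:10 tag and the PGDATA subdirectory are only illustrative choices):

services:
  postgres:
    image: postgres:10
    environment:
      # optional: put the cluster in a subdirectory, as initdb recommends for filesystem mountpoints
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      # persist the default data location (which also holds the transaction log) on the host
      - ./pgdata:/var/lib/postgresql/data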
I have a docker-compose file which creates a volume and a Grafana container... it works fine on my system, but when a friend executes the script, it says:
GF_PATHS_DATA=/var/lib/grafana/ is not writeable
The volume is created with this code:
volumes:
- c:/GrafanaData/:/var/lib/grafana/
If we change it to
volumes:
- c:/GrafanaData/:/test/
It works on his system.
I don't have this error, but he does.
EDIT: we solved it. The problem was that he had drive C set up as a shared drive but had changed his password. He had to "reassign" the shared drive.
Does c:/GrafanaData exist on his system? Is it writeable?
When you run the image without the volume mounted at /var/lib/grafana/, Grafana will be able to write into the empty directory that's there in the image.
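For reference, a sketch of the mount being discussed (the grafana/grafana image name is an assumption; the host directory has to exist and be writable for this form to work):

services:
  grafana:
    image: grafana/grafana
    volumes:
      # bind mount: Grafana needs write access to the host directory behind /var/lib/grafana/
      - c:/GrafanaData/:/var/lib/grafana/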
I'm using the bitnami/postgresql:9.6 docker image to start a PostgreSQL DB. I want to persist data between container restarts, so I used a named volume. Here is my docker-compose config:
postgresql:
  image: 'bitnami/postgresql:9.6'
  ports:
    - 5432
  environment:
    - POSTGRESQL_REPLICATION_MODE=<name>
    - POSTGRESQL_REPLICATION_USER=<name>
    - POSTGRESQL_REPLICATION_PASSWORD=<name>
    - POSTGRESQL_USERNAME=<name>
    - POSTGRESQL_PASSWORD=<name>
    - POSTGRESQL_DATABASE=<name>
    - POSTGRES_INITDB_ARGS="--encoding=utf8"
  volumes:
    - volume-postgresql:/bitnami/postgresql/data
volumes:
  volume-postgresql:
but when I restart the container I get the following error:
postgresql | nami INFO Initializing postgresql
postgresql | Error executing 'postInstallation': initdb: directory "/opt/bitnami/postgresql/data" exists but is not empty
postgresql | If you want to create a new database system, either remove or empty
postgresql | the directory "/opt/bitnami/postgresql/data" or run initdb
postgresql | with an argument other than "/opt/bitnami/postgresql/data".
Can you please help me find what the problem is? Actually, I expected volumes to be used exactly for this purpose... Probably I'm doing something wrong.
OK, it looks like I used the wrong directory. Based on this page https://hub.docker.com/r/bitnami/postgresql/ I should mount /bitnami instead of /bitnami/postgresql/data.
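So the corrected mapping would look roughly like this (the rest of the service definition stays as above):

postgresql:
  image: 'bitnami/postgresql:9.6'
  volumes:
    # mount the named volume at /bitnami, the persistence path documented for this image
    - volume-postgresql:/bitnami
volumes:
  volume-postgresql: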