I have a docker-compose file which creates a volume and a Grafana container. It works fine on my system, but when a friend runs it, he gets:
GF_PATHS_DATA=/var/lib/grafana/ is not writeable
The volume is created with this code:
volumes:
  - c:/GrafanaData/:/var/lib/grafana/
If we change it to
volumes:
  - c:/GrafanaData/:/test/
It works on his system.
I don't have this error, but he does.
EDIT: we solved it. The problem was that he had drive C set up as a shared drive but had changed his password; he had to "reassign" (re-share) the drive.
Does c:/GrafanaData exist on his system? Is it writeable?
When you run the image without a volume mounted at /var/lib/grafana/, Grafana can write into the (empty) directory that already exists inside the image.
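As an aside: on Docker Desktop for Windows, host-path bind mounts only work when the drive is shared with Docker, and that sharing breaks when the Windows password changes (as the edit above confirms). A named volume avoids drive sharing entirely; a minimal sketch, where the grafana-data name and service layout are illustrative rather than your actual file:

version: "3.7"
services:
  grafana:
    image: grafana/grafana
    volumes:
      # named volume: managed by Docker, no shared drive required
      - grafana-data:/var/lib/grafana
volumes:
  grafana-data: {}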
I have a volume
- ./var/volume/postgres/db:/var/lib/postgresql/data
for a postgres container:
image: postgres:10
and I want to point it at an external folder on another disk:
- /media/ubuntuuser/Data/data/db:/var/lib/postgresql/data
but a path outside the working directory doesn't work for me.
Can I fix it somehow?
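For what it's worth, docker-compose does accept absolute host paths in volume bindings, so a mount like the one below is valid as long as the directory exists and is readable/writable for the container's postgres user; note that the dash must be followed by a space to be valid YAML. A sketch based on the paths in the question:

version: "3.7"
services:
  postgres:
    image: postgres:10
    volumes:
      # absolute host path instead of a ./relative one
      - /media/ubuntuuser/Data/data/db:/var/lib/postgresql/data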
Is there a way to migrate from a docker-compose configuration using all anonymous volumes to one using named volumes without needing manual intervention to maintain data (e.g. manually copying folders)? This could entail having users run a script on the host machine but there would need to be some safeguard against a subsequent docker-compose up succeeding if the script hadn't been run.
I contribute to an open source server application that users install on a range of infrastructure. Our users are typically not very technical and are resource-constrained. We have provided a simple docker-compose-based setup. Persistent data is in a containerized postgres database which stores its data on an anonymous volume. All of our administration instructions involve stopping running containers but not bringing them down.
This works well for most users but some users have ended up doing docker-compose down either because they have a bit of Docker experience or by simple analogy to up. When they bring their server back up, they get new anonymous volumes and it looks like they have lost their data. We have provided instructions for recovering from this state but it's happening often enough that we're reconsidering our configuration and exploring transitioning to named volumes.
We have many users happily using anonymous volumes and following our administrative instructions exactly. These are our least technical users and we want to make sure that they are not negatively affected by any change we make to the docker-compose configuration. For that reason, we can't "just" change the docker-compose configuration to use named volumes and provide a script to migrate data. There's too high of a risk that users would forget/fail to run the script and end up thinking they had lost all their data. This kind of approach would be fine if we could somehow ensure that bringing the service back up with the new configuration only succeeds if the data migration has been completed.
Side note for those wondering about our choice to use a containerized database: we also have a path for users to specify an external db server (e.g. RDS) but this is only accessible to our most resourced users.
Edit: Here is a similar ServerFault question.
Given that you're using the official PostgreSQL image, you can exploit its database initialization system:
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.
combined with a change of PGDATA:
This optional variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data. If the data volume you're using is a filesystem mountpoint (like with GCE persistent disks) or remote folder that cannot be chowned to the postgres user (like some NFS mounts), Postgres initdb recommends a subdirectory be created to contain the data.
to solve the problem. The idea is that you define a different location for the Postgres files and mount a named volume there. The new location will initially be empty, which triggers the database initialization scripts. You can use this to move the data out of the anonymous volume, and it will run exactly once.
I've prepared an example for you to test this out. First, create a database on an anonymous volume with some sample data in it:
docker-compose.yml:
version: "3.7"
services:
postgres:
image: postgres
environment:
POSTGRES_PASSWORD: test
volumes:
- ./test.sh:/docker-entrypoint-initdb.d/test.sh
test.sh:
#!/bin/bash
set -e

# Seed the default database with sample data; this runs once, at first initialization.
psql -v ON_ERROR_STOP=1 --username "postgres" --dbname "postgres" <<-EOSQL
CREATE TABLE public.test_table (test_column integer NOT NULL);
INSERT INTO public.test_table VALUES (1);
INSERT INTO public.test_table VALUES (2);
INSERT INTO public.test_table VALUES (3);
INSERT INTO public.test_table VALUES (4);
INSERT INTO public.test_table VALUES (5);
EOSQL
Note how test.sh is mounted: it must be in the /docker-entrypoint-initdb.d/ directory in order to be executed at the initialization stage. Bring the stack up and down to initialize the database with this sample data.
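For reference, that cycle is just the following, assuming both files sit in the current directory:

docker-compose up -d   # first start runs initdb and then test.sh
docker-compose down    # tear down; anonymous volumes are only removed if you add -v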
Now create a script to move the data:
move.sh:
#!/bin/bash
set -e

# initdb has just created a fresh, empty cluster in $PGDATA; discard it
# (the :? guard aborts if PGDATA is unset, so we never expand to rm -rf /*)
rm -rf "${PGDATA:?}"/*
# move the old database from the anonymous volume to the new location
mv /var/lib/postgresql/data/* "$PGDATA/"
and update the docker-compose.yml with a named volume and a custom location for data:
docker-compose.yml:
version: "3.7"
services:
postgres:
image: postgres
environment:
POSTGRES_PASSWORD: test
# set a different location for data
PGDATA: /pgdata
volumes:
# mount the named volume
- pgdata:/pgdata
- ./move.sh:/docker-entrypoint-initdb.d/move.sh
volumes:
# define a named volume
pgdata: {}
When you bring this stack up, Postgres won't find a database (because the named volume is initially empty), so it will run its initialization scripts. First it runs its own script to create an empty database, then it runs the custom scripts from the /docker-entrypoint-initdb.d directory. In this example I mounted move.sh into that directory; it erases the temporary database and moves the old database to the new location.
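Once the new stack is up, a quick way to confirm the data survived the move, assuming the service is named postgres as above:

docker-compose exec postgres psql -U postgres -c 'SELECT * FROM test_table;'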
In PostgreSQL, which directories do we need to persist in general, so that the same data can be used again even after a rebuild?
Like:
I know the main directory:
/var/lib/postgres or /var/lib/postgres/data (small confusion: which one?)
and are there any others, like the logs etc.?
You can define the PGDATA environment variable in your Docker container to specify where Postgres will save its database files.
From the documentation of the official postgres Docker image:
PGDATA:
This optional variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data, but if the data volume you're using is a filesystem mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata) be created to contain the data.
Additionally, per the Postgres documentation, transaction log files are also written under PGDATA:
By default the transaction log is stored in a subdirectory of the main Postgres data folder (PGDATA).
So by default, the postgres image will write all database files to /var/lib/postgresql/data.
To answer your question: it should be sufficient to bind mount a directory to /var/lib/postgresql/data inside your postgres container.
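For example, a minimal compose sketch of such a bind mount; the ./pgdata host path and the password are illustrative:

version: "3.7"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      # persists the whole cluster, including the transaction log, across rebuilds
      - ./pgdata:/var/lib/postgresql/data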
I'm trying to build a Nextcloud server on my Raspberry Pi, connected to an external disk. Installation worked, but during setup I wanted to change the data directory (where all the files will be stored) to my external disk. Setup said: impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/
I found the solution!
First, install Nextcloud on the SD card (like this: https://www.marksei.com/how-to-install-nextcloud-13-on-ubuntu/).
If your disk is formatted as NTFS, you really need to reformat it as ext4; that gives Linux the ability to change permissions on the disk.
Then mount it on the folder /var/nextcloud, and to move the data directory onto the external HDD, follow this tutorial (step: moving the Nextcloud data folder): https://pimylifeup.com/raspberry-pi-nextcloud-server/
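Roughly, the formatting and mounting steps could look like this; the device name /dev/sda1 is an assumption, so check yours with lsblk first:

sudo mkfs.ext4 /dev/sda1          # WARNING: erases the disk; /dev/sda1 is assumed
sudo mkdir -p /var/nextcloud
sudo mount /dev/sda1 /var/nextcloud
# make the mount persistent across reboots
echo '/dev/sda1 /var/nextcloud ext4 defaults 0 2' | sudo tee -a /etc/fstab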
Try doing sudo chown pi:www-data /media/pi/HCLOUD/nextcloudData/ and then change the data directory.
I receive the same error when I try to create a database with
CREATE DATABASE dwh;
and
createdb dwh;
namely:
createdb: database creation failed: ERROR: could not create directory "base/16385": No space left on device
and
ERROR: could not create directory "base/16386": No space left on device
I am using a Postgres AMI on AWS (PostgreSQL/Ubuntu provided by OpenLogic): https://aws.amazon.com/marketplace/ordering/ref=dtl_psb_continue?ie=UTF8&productId=13692aed-193f-4384-91ce-c9260eeca63d&region=eu-west-1
provisioned on an m2.xlarge machine, which should have 17 GB of RAM and 350 GB of SSD storage.
Based on the description provided, you have not mapped your Postgres /data directory to your actual 350GB partition.
If you are running a production server, first of all try to clean up the logs (the pg_log folder) to free disk space and bring the box back to normal operation, AND create a backup of your database.
Run df -h to see disk utilization and lsblk to see what is mounted where. It is highly likely that AWS by default gave you an unextended 350GB volume. You have two options:
Add a new disk (take a look at the Ubuntu procedure for adding a new drive) and map it to your Postgres data folder.
Resize the existing file system with resize2fs (a sketch follows below); a relevant answer can be found on AskUbuntu.
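For option 2, the sequence might look like this; the device and partition names are assumptions, so confirm them with lsblk first:

df -h                        # check current filesystem utilization
lsblk                        # find the disk and partition layout
sudo growpart /dev/xvda 1    # grow partition 1 to fill the disk (from cloud-guest-utils)
sudo resize2fs /dev/xvda1    # grow the ext4 filesystem to match the partition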