I'm using the bitnami/postgresql:9.6 Docker image to start a PostgreSQL DB. I want to persist data between container restarts, so I used named volumes. Here is my docker-compose config:
postgresql:
  image: 'bitnami/postgresql:9.6'
  ports:
    - 5432
  environment:
    - POSTGRESQL_REPLICATION_MODE=<name>
    - POSTGRESQL_REPLICATION_USER=<name>
    - POSTGRESQL_REPLICATION_PASSWORD=<name>
    - POSTGRESQL_USERNAME=<name>
    - POSTGRESQL_PASSWORD=<name>
    - POSTGRESQL_DATABASE=<name>
    - POSTGRES_INITDB_ARGS="--encoding=utf8"
  volumes:
    - volume-postgresql:/bitnami/postgresql/data

volumes:
  volume-postgresql:
But when I restart the container I get the following error:
postgresql | nami INFO Initializing postgresql
postgresql | Error executing 'postInstallation': initdb: directory "/opt/bitnami/postgresql/data" exists but is not empty
postgresql | If you want to create a new database system, either remove or empty
postgresql | the directory "/opt/bitnami/postgresql/data" or run initdb
postgresql | with an argument other than "/opt/bitnami/postgresql/data".
Can you please help me find what the problem is? I actually expected volumes to be used for exactly this purpose... I'm probably doing something wrong.
OK, it looks like I used the wrong directory. Based on https://hub.docker.com/r/bitnami/postgresql/ I should mount /bitnami instead of /bitnami/postgresql/data.
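For reference, the corrected volume mapping in the compose snippet above would look like this (environment entries elided, same placeholders as in the original):

```yaml
postgresql:
  image: 'bitnami/postgresql:9.6'
  ports:
    - 5432
  volumes:
    - volume-postgresql:/bitnami

volumes:
  volume-postgresql:
```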
Related
I have this volume
  - ./var/volume/postgres/db:/var/lib/postgresql/data
for a postgres container:
  image: postgres:10
And I want to point it at a folder on another disk:
  - /media/ubuntuuser/Data/data/db:/var/lib/postgresql/data
but a path outside the working directory doesn't work for me.
Can I fix it somehow?
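Compose does accept absolute host paths in bind mounts, so a sketch like the following should work (note the space after the dash; the host directory must exist and be readable by the Docker daemon, and mounts from removable media can additionally be blocked by SELinux/AppArmor or a snap-confined Docker):

```yaml
services:
  postgres:
    image: postgres:10
    volumes:
      - /media/ubuntuuser/Data/data/db:/var/lib/postgresql/data
```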
I have a SpringBoot application container myApi that depends on another SpringBoot application container configApi, they both use flyway. They also both depend on a postgres container. configApi exposes an endpoint that myApi uses to fetch all relevant configs (db details etc).
What currently happens is:
1. The postgres container starts and initializes the appropriate DBs and users
2. The configApi container starts
   a) it connects to postgres
   b) it runs a flyway migration (creates the required schema and tables)
   c) the api launches and is ready
3. The myApi container starts
   a) it hits a config endpoint exposed by configApi
   b) the request fails because configApi cannot find any useful data in postgres, since none was inserted
My restrictions are:
I cannot modify configApi code to contain anything specific to myApi or an environment
Flyway migration during configApi launch is what creates the tables that would contain any required data
I cannot create the tables and populate them when postgres is launched (using init.sql) because then configApi flyway migration will fail
myApi cannot contain any hard coded or environmental info about postgres since it's all supposed to be fetched from configApi endpoints
Problem summary TLDR:
How do I execute a sql script against the postgres container after configApi has launched but before myApi has launched without modifying configApi or myApi to contain anything specific to each other's environments?
I have the following docker-compose file:
version: "3"
volumes:
  db_data:
services:
  postgres:
    image: postgres:10.14
    volumes:
      - ./init-local.sql:/docker-entrypoint-initdb.d/init.sql
      - db_data:/var/lib/postgresql
    ports:
      - 5432:5432
  configApi:
    image: org/config-api:latest
    ports:
      - 8234:8234
    environment:
      - DB_PORT=5432
      - DB_HOST=postgres
    depends_on:
      - postgres
  myApi:
    build: ./my-api
    image: org/my-api:latest
    container_name: my-api
    ports:
      - 9080:9080
    environment:
      - CONFIG_MANAGER_API_URL=http://configApi:8234/
    depends_on:
      - postgres
      - configApi
Notes (I'll be adding more as questions come in):
I am using a single postgres container because this is for local/test; both APIs use separate databases within that postgres instance.
So here's my solution.
I modified my flyway code to dynamically include extra scripts if they exist, as follows.
In the database Java config in configApi I read an env variable that specifies a directory with extra, app-external scripts:
// on class level
@Value("${FLYWAY_FOLDER}")
private String externalFlywayFolder;

// when creating the DataSource bean
List<String> flywayFolders = new ArrayList<>();
flywayFolders.add("classpath:db/migrations");
if (externalFlywayFolder != null && !externalFlywayFolder.isEmpty()) {
    flywayFolders.add("filesystem:" + externalFlywayFolder);
}
String[] flywayFoldersArray = flywayFolders.toArray(new String[0]);
Flyway flyway = Flyway
        .configure()
        .dataSource(dataSource)
        .baselineOnMigrate(true)
        .schemas("flyway")
        .mixed(true)
        .locations(flywayFoldersArray)
        .load();
flyway.migrate();
Then I modified the docker compose to attach extra files to the container and set the FLYWAY_FOLDER env variable:
configApi:
  image: org/config-api:latest
  volumes:
    - ./scripts/flyway/configApi:/flyway # attach scripts to container
  ports:
    - 8234:8234
  environment:
    - DB_PORT=5432
    - DB_HOST=postgres
    - FLYWAY_FOLDER=flyway # specify script dir for api
  depends_on:
    - postgres
Then it's just a case of adding the files. The trick is to make them repeatable migrations so they don't interfere with any versioned migrations that may be done for configApi itself: repeatable migrations are applied after versioned migrations, and they are reapplied whenever their checksum changes.
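For illustration, a repeatable migration is simply a SQL file whose name starts with R__ placed in the mounted folder; the file name, table, and values below are hypothetical:

```sql
-- ./scripts/flyway/configApi/R__seed_my_api_config.sql
-- Repeatable migration: Flyway re-runs it whenever this file's checksum changes.
INSERT INTO app_config (app_name, db_host, db_port)
VALUES ('myApi', 'postgres', 5432)
ON CONFLICT (app_name) DO UPDATE
    SET db_host = EXCLUDED.db_host,
        db_port = EXCLUDED.db_port;
```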
In my project we use InfluxDB and Grafana for our logs and other analysis, running on an Ubuntu machine. Recently, due to a migration process, the ports for Grafana (3000) and InfluxDB (8086) were blocked, and they will remain blocked for security reasons. So I am unable to connect to them through the browser or Postman.
As a workaround we are planning to move these (at least the dashboards) to a local setup. I checked that the processes are up and running, but I am unable to locate the physical location of the dashboard files.
I have the default settings and no separate database configuration for Grafana:
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as on string using the url properties.
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
;url =
# For "postgres" only, either "disable", "require" or "verify-full"
;ssl_mode = disable
;ca_cert_path =
;client_key_path =
;client_cert_path =
;server_cert_name =
Is there any location where I can find these JSON files?
I figured it out with some research; I thought I could help the community in case someone is searching for the same answer.
The default folder for dashboards is /var/lib/grafana. If you navigate to that folder, you will find a file named grafana.db.
Download this file to your local machine or any machine you want.
Please download sqlitebrowser from here.
Now in sqlitebrowser, click Open Database and select the grafana.db file. Right-click on the dashboard table, select Browse Table, and in the data section you will find the dashboards.
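If you prefer a script to sqlitebrowser, the same extraction can be sketched in a few lines. This assumes a dashboard table with title and data columns, where data holds the dashboard JSON (Grafana's real schema has more columns and can vary between versions), so an in-memory stand-in database is used here instead of a real grafana.db:

```python
import json
import sqlite3

# Stand-in for a downloaded grafana.db; swap in sqlite3.connect("grafana.db").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dashboard (id INTEGER PRIMARY KEY, title TEXT, data TEXT)")
conn.execute(
    "INSERT INTO dashboard (title, data) VALUES (?, ?)",
    ("CPU usage", json.dumps({"title": "CPU usage", "panels": []})),
)

# Read every dashboard row and parse its JSON payload.
dashboards = {}
for title, data in conn.execute("SELECT title, data FROM dashboard"):
    dashboards[title] = json.loads(data)

print(dashboards["CPU usage"]["title"])
```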
Look at whatever is starting Grafana on your machine. It could be set up as a service, run from a script like .bashrc, or run from Docker. /var/lib/grafana does look like the default place.
In my case, on an RPi with Influx, Grafana etc. using IOTstack, it is started from Docker with docker-compose, and the docker-compose.yml file defines the paths. /var/lib/grafana is the default place, but it can be remapped somewhere else, and it's likely to be mapped somewhere else in order to be backed up: ~/IOTstack/volumes/grafana/data in my case.
grafana:
  container_name: grafana
  image: grafana/grafana
  restart: unless-stopped
  user: "0"
  ports:
    - "3000:3000"
  environment:
    - GF_PATHS_DATA=/var/lib/grafana
    - GF_PATHS_LOGS=/var/log/grafana
  volumes:
    - ./volumes/grafana/data:/var/lib/grafana
    - ./volumes/grafana/log:/var/log/grafana
  networks:
    - iotstack_nw
Well.
I have a docker-compose.yaml with a Postgres image (it is a simple example), and a NodeJS script that runs a raw SQL query against Postgres:
COPY (SELECT * FROM mytable) TO '/var/lib/postgresql/data/mytable.csv'
What happens?
mytable.csv is saved inside the Postgres container.
What do I need?
To save mytable.csv to the HOST MACHINE (or to another container from the docker-compose).
For context: I have big tables (1M+ rows), and the files need to be generated and saved by the Postgres server, but the saving process is started via a NodeJS script with a COPY query from another container or the host machine.
Do you know how to do this?
my docker-compose.yml:
version: "3.6"
services:
  postgres:
    image: postgres:10.4
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=1234
    volumes:
      - postgres-storage:/var/lib/postgresql/data
    ports:
      - "5432:5432"
UPDATE:
I made a diagram in Miro of my process. The main problem is in the THIRD step: I can't return the .csv file to NodeJS or save it into the NodeJS container. I can do 2 things:
1. Return the rows and build the file in NodeJS (but the NodeJS server will do that slowly)
2. Save the .csv file in the Postgres container; but I need the .csv file in the NodeJS container
Schema with the two containers that I need:
Well, thanks to the person who linked this question to the question about COPY TO STDOUT (sorry, I don't remember the question ID).
So, the problem is solved by using COPY TO STDOUT and the small npm module pg-copy-streams.
The code:
const fs = require('fs');
const { Client } = require('pg');
const { to: copyTo } = require('pg-copy-streams');

const client = new Client(config);
await client.connect();

const output = fs.createWriteStream('./output.csv');
// client.query() with a copy-to stream returns a readable stream of the CSV data
const stream = client.query(copyTo('COPY (SELECT * FROM my_table) TO STDOUT WITH (FORMAT CSV, HEADER)'));
stream.pipe(output);
So Postgres streams the CSV to the NodeJS script on the host, and on the NodeJS side we only need to write this stream to a file, without any CSV conversion.
Thanks!
In PostgreSQL, which directories do we need to persist in general, so that I can use the same data again even if I rebuild?
Like: I know the main directory, /var/lib/postgres or /var/lib/postgres/data (small confusion: which one?), and are there any others, like the logs etc.?
You can define the PGDATA environment variable in your docker container to specify where postgres will save its database files.
From the documentation of the official postgres Docker image:
PGDATA:
This optional variable can be used to define another location - like a
subdirectory - for the database files. The default is
/var/lib/postgresql/data, but if the data volume you're using is a
filesystem mountpoint (like with GCE persistent disks), Postgres
initdb recommends a subdirectory (for example
/var/lib/postgresql/data/pgdata ) be created to contain the data.
Additionally from the postgres documentation transaction log files are also written to PGDATA:
By default the transaction log is stored in a subdirectory of the
main Postgres data folder (PGDATA).
So by default the postgres image will write database files to /var/lib/postgresql/data.
To answer your question: it should be sufficient to bind mount a directory to /var/lib/postgresql/data inside your postgres container.
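As a minimal sketch (host path and password are placeholders), a bind mount plus the optional PGDATA subdirectory from the quoted docs could look like:

```yaml
services:
  postgres:
    image: postgres:10
    environment:
      - POSTGRES_PASSWORD=changeme              # placeholder
      - PGDATA=/var/lib/postgresql/data/pgdata  # optional subdirectory, per the docs
    volumes:
      - ./pgdata:/var/lib/postgresql/data       # host directory to persist
```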