Our project uses TimescaleDB on PostgreSQL in an Azure Container Instance. But when I change a setting, for example max_connections in the /var/lib/postgresql/data/postgresql.conf file, the value reverts to its previous state after a restart (I used the vi editor to modify it). I am building the container via a Dockerfile:
FROM timescale/timescaledb:latest-pg12
ENV POSTGRES_USER=admin \
    POSTGRES_DB=timescaledb \
    POSTGRES_PASSWORD=test123#
Is there some environment variable to set these values? What is the best way to store the database, and is it possible to transfer the DB to Azure Storage?
If you are managing the Dockerfile yourself, then you can modify postgresql.conf there. For example, add the following line to the Dockerfile, which will set max_connections to 100:
RUN sed -r -i "s/[#]*\s*(max_connections)\s*=\s*[0-9]+/\1 = 100/" /usr/local/share/postgresql/postgresql.conf.sample
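Alternatively, instead of baking the value into the sample file, a minimal sketch is to pass it at runtime in the same Dockerfile; the image's entrypoint forwards these arguments to the postgres server:
CMD ["postgres", "-c", "max_connections=100"]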
I also suggest checking the Azure documentation on configuring a PostgreSQL server. For example, updating the max_connections server configuration parameter of the server mydemoserver.postgres.database.azure.com under resource group myresourcegroup would be:
az postgres server configuration set --name max_connections --resource-group myresourcegroup --server mydemoserver --value 100
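To verify the change afterwards, you can read the parameter back with the corresponding show command:
az postgres server configuration show --name max_connections --resource-group myresourcegroup --server mydemoserver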
I am using the Bitnami Postgres Docker container and noticed that my ORM which uses UUIDs requires the uuid-ossp extension to be available. After some trial and error I noticed that I had to manually install it using the postgres superuser since my custom non-root user created via the POSTGRESQL_USERNAME environment variable is not allowed to execute CREATE EXTENSION "uuid-ossp";.
I'd like to know what a script inside /docker-entrypoint-initdb.d might look like that can execute this command against the specific database; to be more precise, I want to automate the following steps I had to perform manually:
psql -U postgres  # this requires interactive password input
\c target_database
CREATE EXTENSION "uuid-ossp";
I think that something like this should work; with PGPASSWORD set there is no interactive prompt, and the heredoc feeds the commands to psql:
PGPASSWORD=$POSTGRESQL_POSTGRES_PASSWORD psql -U postgres <<'EOSQL'
\c target_database
CREATE EXTENSION "uuid-ossp";
EOSQL
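Wrapped up as an init script, a minimal sketch might look like this (the file name is hypothetical; *.sh files in /docker-entrypoint-initdb.d run once, when the data directory is first initialized):
#!/bin/bash
# 10-uuid-ossp.sh (hypothetical name), placed in /docker-entrypoint-initdb.d
set -e
PGPASSWORD=$POSTGRESQL_POSTGRES_PASSWORD psql -U postgres -d target_database \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'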
If you want to do it on startup, you need to add a file to the startup scripts. Check out the config section of their image documentation: https://hub.docker.com/r/bitnami/postgresql-repmgr/
If you're deploying it via Helm, you can add your scripts in the postgresql.initdbScripts variable of your values.yaml, as sketched below.
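A minimal values.yaml sketch for the postgresql-ha chart (the script and database names are assumptions):
postgresql:
  initdbScripts:
    enable_uuid_ossp.sql: |
      \c target_database
      CREATE EXTENSION IF NOT EXISTS "uuid-ossp";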
If the deployment is already running, you'll need to connect as the repmgr user, not the Postgres user you created. By default that user is deliberately NOT a superuser, for security purposes; this way most of your connections are unprivileged.
For example, I deployed bitnami/postgresql-ha via Helm to a k8s cluster in a namespace called "data" with the release name "prod-pg". I can connect to the database with a privileged user by running:
export REPMGR_PASSWORD=$(kubectl get secret --namespace data prod-pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)
kubectl run prod-pg-postgresql-ha-client \
--rm --tty -i --restart='Never' \
--namespace data \
--image docker.io/bitnami/postgresql-repmgr:14 \
--env="PGPASSWORD=$REPMGR_PASSWORD" \
--command -- psql -h prod-pg-postgresql-ha-postgresql -p 5432 -U repmgr -d repmgr
This drops me into an interactive terminal:
$ ./connect-db.sh
If you don't see a command prompt, try pressing enter.
repmgr=#
I am trying to deploy a PostgreSQL database on Azure Container Instances.
To deploy on Docker using a bind mount (since Azure Container Instances only support bind mounts), I am using the command below, and it deploys successfully on Docker:
docker run -d -p 5434:5432 --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -e PGDATA=/var/lib/postgresql/data/pgdata -v /home/ubuntu/volum:/var/lib/postgresql/data postgres
If I do something similar to deploy on an Azure Container Instance:
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name postgreariesdb25-1 \
--location eastus \
--image postgres \
--dns-name-label $ACI_DNS_LABEL \
--environment-variables POSTGRES_PASSWORD=mysecretpassword PGDATA=/var/lib/postgresql/data/pgdata \
--ports 5432 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /var/lib/postgresql/data
I am getting the message below in the Azure container logs:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... 2020-11-24 05:23:39.218 UTC [85] FATAL: data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
2020-11-24 05:23:39.218 UTC [85] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
A volume mount is required so that the data survives a container restart.
This is a known error when mounting an Azure File Share to an Azure Container Instance: currently it is not possible to change the ownership of the mount point. If you do not want to use other services, you need to create a script that moves the data to the mount point, and the mount point should be a new folder that does not exist in the image. In your case, the mount point /var/lib/postgresql/data exists in the image and contains the files that PostgreSQL depends on, so this path cannot be the mount point.
Tell me how I can store PostgreSQL database data in an Azure Storage account. PostgreSQL is deployed to an Azure Container Instance, and when I restart the container instance all data disappears.
Dockerfile
FROM timescale/timescaledb:latest-pg12
ENV POSTGRES_USER=admin \
    POSTGRES_DB=dev-timescaledb \
    POSTGRES_PASSWORD=password \
    PGDATA=/var/lib/postgresql/data/pgdata
CMD ["postgres", "-c", "max_connections=500"]
Command for creating a Container Instance and mounting a Storage Account
az container create --resource-group test-env --name test-env \
  --image test-env.azurecr.io/timescale:latest \
  --registry-username test-env --registry-password "registry-password" \
  --dns-name-label test-env --ports 5432 --cpu 2 --memory 5 \
  --azure-file-volume-account-name testenv \
  --azure-file-volume-account-key 'account-key' \
  --azure-file-volume-share-name 'postgres-data' \
  --azure-file-volume-mount-path '/var/lib/postgresql/data'
But I got an error:
data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
The server must be started by the user that owns the data directory.
It is caused by a known issue: you cannot change the ownership of the mount point when you mount an Azure File Share to a Container Instance, and it cannot be worked around currently. You can find the same issue described on SO. I recommend using AKS with a disk volume instead; it solves the data-persistence problem for Postgres.
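For illustration, a minimal sketch of the disk-backed volume on AKS (the claim name is an assumption, and the built-in storage class name can vary by AKS version; managed-csi is the current Azure Disk default):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data            # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi  # AKS built-in Azure Disk storage class
  resources:
    requests:
      storage: 10Gi
Mounting this claim at /var/lib/postgresql/data in the pod spec avoids the ownership error, because Azure Disk volumes (unlike Azure Files in ACI) allow chown.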
What is the simplest way to configure the parameter max_prepared_transactions=100 in Docker/Kubernetes?
I am using this image:
https://hub.docker.com/_/postgres/
which has its postgresql.conf file at /var/lib/postgresql/data.
In my Kubernetes deployment, that directory is externally mounted, so I can't copy postgresql.conf in using the Dockerfile. I therefore need to either specify that parameter as an ENV parameter in the Kubernetes .yml file, or change the location of the postgresql.conf file to, for example, /etc/postgresql.conf (can I do this via an ENV parameter too?).
Thanks
You can set this config as a runtime flag when you start your Docker container, something like this:
$ docker run -d postgres --max_prepared_transactions=100
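In a Kubernetes manifest, the equivalent is to pass the flag through the container's args, which the image's entrypoint forwards to postgres (a sketch; the container name and image tag are assumptions):
containers:
  - name: postgres         # hypothetical
    image: postgres:13
    args: ["-c", "max_prepared_transactions=100"]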
If you're using the Postgres chart from Bitnami, you can add this to your values.yaml:
postgresqlExtendedConf:
{ "maxPreparedTransactions" : 100 }
I have installed the gcloud/bq/gsutil command-line tools on a Linux server, and we have several accounts configured on this server.
gcloud config configurations list
NAME  IS_ACTIVE  ACCOUNT    PROJECT  DEFAULT_ZONE  DEFAULT_REGION
gaa   True       a@xxx.com  a
gab   False      b@xxx.com  b
Now I have a problem running both gaa and gab on this server at the same time, because they have different access permissions on BigQuery and Cloud Storage.
I use the commands below (bq and gsutil):
Set up the account:
gcloud config set account a@xxx.com
Copy data from BigQuery to Cloud Storage:
bq extract --compression=GZIP --destination_format=NEWLINE_DELIMITED_JSON 'nl:82421.ga_sessions_20161219' gs://ga-data-export/82421/82421_ga_sessions_20161219_*.json.gz
Download data from Cloud Storage to the local system:
gsutil -m cp gs://ga-data-export/82421/82421_ga_sessions_20161219*gz .
Running only one account is not a problem, but several accounts need to run on one server at the same time, and I have no idea how to handle this case.
Per the gcloud documentation on configurations, you can switch your active configuration via the --configuration flag for any gcloud command. However, gsutil does not have such a flag; you must set the environment variable CLOUDSDK_ACTIVE_CONFIG_NAME:
$ # Shell 1
$ export CLOUDSDK_ACTIVE_CONFIG_NAME=gaa
$ gcloud # ...
$ # Shell 2
$ export CLOUDSDK_ACTIVE_CONFIG_NAME=gab
$ gsutil # ...
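The variable can also be set per invocation rather than per shell, so both accounts can run side by side; applied to the commands from the question (use gab the same way for the second account's jobs):
CLOUDSDK_ACTIVE_CONFIG_NAME=gaa bq extract --compression=GZIP --destination_format=NEWLINE_DELIMITED_JSON 'nl:82421.ga_sessions_20161219' gs://ga-data-export/82421/82421_ga_sessions_20161219_*.json.gz
CLOUDSDK_ACTIVE_CONFIG_NAME=gaa gsutil -m cp gs://ga-data-export/82421/82421_ga_sessions_20161219*gz .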