I am working on a GitHub Actions pipeline to back up my Aurora PostgreSQL database on a weekly basis. Right now I have a pipeline that creates two database clusters, but both of them are empty. I want to add some data to the 'Alpha' database so I can try cloning it to the secondary database.
This is my current pipeline:
      - name: Create Database Cluster for Alpha
        run: |
          aws rds create-db-cluster --db-cluster-identifier sample-cluster-alpha --engine aurora-postgresql \
            --engine-version 8.0 --master-username postgres_user --master-user-password password \
            --db-subnet-group-name evoya-alpha --vpc-security-group-ids sg-00a4bf18940752a90
      - name: Create Database Cluster for Beta
        run: |
          aws rds create-db-cluster --db-cluster-identifier sample-cluster-beta --engine aurora-postgresql \
            --engine-version 8.0 --master-username postgres_user --master-user-password password \
            --db-subnet-group-name evoya-alpha --vpc-security-group-ids sg-00a4bf18940752a90
      - name: Create Database instance for Alpha
        run: |
          aws rds create-db-instance \
            --db-cluster-identifier sample-cluster-alpha \
            --db-instance-class db.r5.large \
            --engine aurora-postgresql
      - name: Create Database instance for Beta
        run: |
          aws rds create-db-instance \
            --db-cluster-identifier sample-cluster-beta \
            --db-instance-class db.r5.large \
            --engine aurora-postgresql
This works for creating two empty databases, but I need to get some data into the 'Alpha' database, and I am not able to do that from GitHub Actions or pgAdmin because I am getting a timed-out error, even though I have allowed all traffic in the security group.
If anyone can help, that would be tremendous.
Thanks.
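One way to load data from a workflow step, sketched below under assumptions not in the question: the cluster must be reachable from the runner, and seed.sql is a hypothetical dump file checked into the repository.

```shell
# Illustrative only: resolve the Alpha cluster endpoint, then load a seed file.
# Assumes the runner can reach the cluster (publicly accessible instance, or a
# runner inside the VPC); seed.sql is a hypothetical SQL dump in the repo.
ENDPOINT=$(aws rds describe-db-clusters \
  --db-cluster-identifier sample-cluster-alpha \
  --query 'DBClusters[0].Endpoint' --output text)
PGPASSWORD=password psql -h "$ENDPOINT" -U postgres_user -d postgres -f seed.sql
```

A timeout despite an open security group often means the instance is not publicly accessible or the client is outside the VPC, so the instance's --publicly-accessible setting is worth checking as well.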
To make parameters stored in a key vault available to my Azure web app, I've executed the following:
identity=`az webapp identity assign \
--name $(appName) \
--resource-group $(appResourceGroupName) \
--query principalId -o tsv`
az keyvault set-policy \
--name $(keyVaultName) \
--secret-permissions get \
--object-id $identity
Now I want to create an Azure Postgres server, taking the admin password from a key vault:
az postgres server create \
--location $(location) \
--resource-group $(ResourceGroupName) \
--name $(PostgresServerName) \
--admin-user $(AdminUserName) \
--admin-password '$(AdminPassWord)' \
--sku-name $(pgSkuName)
If the value of my AdminPassWord here is something like
#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)
I need the single quotes (as above) to get the Postgres server created. But does this mean that the password will be the whole string '#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)' instead of the secret stored in <myKv>?
When running my pipeline without the quotes (i.e. just --admin-password $(AdminPassWord) \), I got the error message syntax error near unexpected token `('. I thought it could be a consequence of the fact that I haven't set the policy --secret-permissions get for the Postgres server resource. But how can I set it before creating the Postgres server?
The expression #Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/) is used to access a key vault secret value in an Azure web app; once you configure it with the first two commands, the managed identity of the web app will be able to access the key vault secret.
But if you want to create an Azure Postgres server with the password, you need to obtain the secret value first and use that, rather than the expression.
For Azure CLI, you could use az keyvault secret show, then pass the secret to the parameter --admin-password in az postgres server create.
az keyvault secret show [--id]
[--name]
[--query-examples]
[--subscription]
[--vault-name]
[--version]
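Put together, a sketch using the placeholder variable names from the question (the secret name AdminPassWord and the vault variable are taken from the question; everything else is illustrative):

```shell
# Sketch: fetch the secret value first, then pass it to the create command.
# $(keyVaultName) etc. are Azure DevOps pipeline variables, substituted
# before the shell runs, as in the question's own snippets.
ADMIN_PASSWORD=$(az keyvault secret show \
  --vault-name $(keyVaultName) \
  --name AdminPassWord \
  --query value -o tsv)

az postgres server create \
  --location $(location) \
  --resource-group $(ResourceGroupName) \
  --name $(PostgresServerName) \
  --admin-user $(AdminUserName) \
  --admin-password "$ADMIN_PASSWORD" \
  --sku-name $(pgSkuName)
```

Quoting the shell variable as "$ADMIN_PASSWORD" avoids the `syntax error near unexpected token` that the raw #Microsoft.KeyVault(...) expression triggers, because the parentheses never reach the shell unquoted.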
I am trying to deploy a Postgres database on Azure Container Instances.
To deploy on Docker using a bind mount (since Azure Container Instances only supports bind mounts) I am using the command below, and it deploys on Docker:
docker run -d -p 5434:5432 --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -e PGDATA=/var/lib/postgresql/data/pgdata -v /home/ubuntu/volum:/var/lib/postgresql/data postgres
If I do something similar to deploy on Azure Container Instances:
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name postgreariesdb25-1 \
--location eastus \
--image postgres \
--dns-name-label $ACI_DNS_LABEL \
--environment-variables POSTGRES_PASSWORD=mysecretpassword PGDATA=/var/lib/postgresql/data/pgdata \
--ports 5432 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /var/lib/postgresql/data
I am getting the message below in the logs of the Azure container:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
2020-11-24 05:23:39.218 UTC [85] FATAL: data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
2020-11-24 05:23:39.218 UTC [85] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
running bootstrap script ...
A volume mount is required so the data survives a container restart.
This is a known error when mounting an Azure File Share to an Azure Container Instance: changing the ownership of the mount point is currently not supported. If you do not want to use other services, you need a script that moves the data to the mount point, and the mount point should be a new folder that does not exist in the image. In your case, the mount point /var/lib/postgresql/data exists in the image and contains the files that PostgreSQL depends on, so it cannot be the mount point.
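As a sketch of that workaround, not a tested recipe: mount the share at a path that does not already exist in the image (the path /mnt/pgshare below is illustrative), and have a startup script copy the initialized data directory onto it.

```shell
# Illustrative sketch: mount the Azure File Share at a new path (/mnt/pgshare)
# instead of /var/lib/postgresql/data, so the mount point is a folder that
# does not exist in the image; a startup script would then sync data onto it.
az container create \
  --resource-group $ACI_PERS_RESOURCE_GROUP \
  --name postgres-with-share \
  --image postgres \
  --environment-variables POSTGRES_PASSWORD=mysecretpassword \
  --ports 5432 \
  --azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
  --azure-file-volume-mount-path /mnt/pgshare
```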
How can I store PostgreSQL database data in an Azure Storage account? PostgreSQL is deployed to an Azure Container Instance, and when I restart the container instance all the data disappears.
Dockerfile
FROM timescale/timescaledb:latest-pg12
ENV POSTGRES_USER=admin \
    POSTGRES_DB=dev-timescaledb \
    POSTGRES_PASSWORD=password \
    PGDATA=/var/lib/postgresql/data/pgdata
CMD ["postgres", "-c", "max_connections=500"]
Command for creating a Container Instance and mounting a Storage Account
az container create \
  --resource-group test-env \
  --name test-env \
  --image test-env.azurecr.io/timescale:latest \
  --registry-username test-env \
  --registry-password "registry-password" \
  --dns-name-label test-env \
  --ports 5432 \
  --cpu 2 \
  --memory 5 \
  --azure-file-volume-account-name testenv \
  --azure-file-volume-account-key 'account-key' \
  --azure-file-volume-share-name 'postgres-data' \
  --azure-file-volume-mount-path '/var/lib/postgresql/data'
But I got this error:
data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
The server must be started by the user that owns the data directory.
It is caused by an existing issue: you cannot change the ownership of the mount point when you mount an Azure File Share to a Container Instance, and there is currently no way around it. You can find the same issue on SO. I recommend using AKS with a disk volume instead, which solves the data-persistence problem for Postgres.
Our project uses TimescaleDB on PostgreSQL in an Azure Container Instance. But when I change a setting, for example max_connections in the /var/lib/postgresql/data/postgresql.conf file, the value returns to its previous state after a reboot (I used the vi editor to modify it). I am building the container via a Dockerfile:
FROM timescale/timescaledb:latest-pg12
ENV POSTGRES_USER=admin \
    POSTGRES_DB=timescaledb \
    POSTGRES_PASSWORD=test123#
Is there an environment variable to set these values? What is the best way to store the database, and is it possible to move the DB to Azure Storage?
If you are managing the Dockerfile yourself, then you can modify postgresql.conf there. For example, add the following RUN line to the Dockerfile, which sets max_connections to 100 (the original regex matched "= 0", which never occurs in the sample file; "[0-9]+" matches any existing numeric value):
RUN sed -r -i "s/[#]*\s*(max_connections)\s*=\s*[0-9]+/\1 = 100/" /usr/local/share/postgresql/postgresql.conf.sample
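Put together with the image from the question, such a Dockerfile would look roughly like this (a sketch; the value 100 is just an example):

```dockerfile
# Sketch: bake the setting into the sample config at build time so it
# survives container restarts (initdb copies the sample into PGDATA).
FROM timescale/timescaledb:latest-pg12
RUN sed -r -i "s/[#]*\s*(max_connections)\s*=\s*[0-9]+/\1 = 100/" \
    /usr/local/share/postgresql/postgresql.conf.sample
```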
I also suggest checking the Azure documentation on configuring a PostgreSQL server. For example, updating the max_connections server configuration parameter of the server mydemoserver.postgres.database.azure.com under resource group myresourcegroup would be:
az postgres server configuration set --name max_connections --resource-group myresourcegroup --server mydemoserver --value 100
The following command works when the --engine is set to postgres, but when I change it to aurora-postgresql (per the docs), I get an odd error:
aws rds create-db-instance \
--db-name mydb1 \
--db-instance-identifier mydb1 \
--db-instance-class db.r5.large \
--engine aurora-postgresql \
--master-username postgres \
--master-user-password XXXXX \
--availability-zone us-east-1a \
--db-subnet-group-name mydb-subnets-us-east-1 \
--allocated-storage 100
Error:
An error occurred (InvalidParameterCombination) when calling the CreateDBInstance operation:
Invalid storage type for DB engine manfred: aurora
What is manfred:?
And I've attempted all of the documented --storage-type values I see (standard, io1 and gp2), and they all generate the error:
An error occurred (StorageTypeNotSupported) when calling the CreateDBInstance operation:\n
Invalid storage type: XXX
I haven't been able to find a single example of creating an Aurora Postgres DB from the CLI. Any advice from someone who has done it is appreciated.
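Not an authoritative answer, but a sketch of the usual Aurora pattern (the cluster identifier mydb1-cluster is illustrative): Aurora manages storage at the cluster level, so --allocated-storage and --storage-type do not apply, the master credentials go on the cluster, and the instance is then attached to that cluster.

```shell
# Sketch: create the Aurora cluster first (credentials and storage live here),
# then attach an instance without --allocated-storage / --storage-type.
aws rds create-db-cluster \
  --db-cluster-identifier mydb1-cluster \
  --engine aurora-postgresql \
  --master-username postgres \
  --master-user-password XXXXX \
  --db-subnet-group-name mydb-subnets-us-east-1

aws rds create-db-instance \
  --db-instance-identifier mydb1 \
  --db-cluster-identifier mydb1-cluster \
  --db-instance-class db.r5.large \
  --engine aurora-postgresql
```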