Azure configure backup doesn't see PostgreSQL

I am trying to configure a backup of a PostgreSQL server using the Azure Backup center.
I created a vault and a resource group, but when I try to configure the backup using the vault and a backup policy, it can't see the PostgreSQL database server in the resources.
I appreciate any help in troubleshooting the issue.

Please follow the official documentation to troubleshoot the issue:
Select or create a backup policy that defines the backup schedule.
+ Add and select the PostgreSQL database.
Note:
The backup service validates that it has the necessary access permissions to read secret details from the key vault and to connect to the database.
If an access permission is missing, it displays a "role assignment not done" error message; check whether the role assignments have been made and re-validate.
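If a role assignment is missing, you can grant the access manually, as the first reference below describes. A minimal sketch with the Azure CLI, assuming hypothetical resource names and that you have the object ID of the Backup vault's managed identity:
# Give the vault's managed identity read access to the PostgreSQL server:
az role assignment create --assignee <vault-identity-object-id> --role "Reader" \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/servers/<server-name>
# Let the same identity read the connection secret from the key vault:
az keyvault set-policy --name <key-vault-name> --object-id <vault-identity-object-id> \
  --secret-permissions get list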
Reference:
https://learn.microsoft.com/en-us/azure/backup/backup-azure-database-postgresql-overview#grant-access-on-the-azure-postgresql-server-and-key-vault-manually
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-backup-restore


Copy data from Postgres DB (GCP Project A) to another Postgres DB (GCP Project B)

I would be happy to get your help / feedback regarding this data load.
Goal:
Load source data from a Postgres database, which is located in GCP project A to another Postgres database, which is located in GCP project B.
Challenge:
Get a connection to the Postgres DB in GCP Project A (I have an IAM account with sufficient rights to run a COPY TO / COPY FROM command) and copy the table either to a CSV file or to a dump that can be inserted into the other Postgres DB in GCP Project B.
How do I connect to the database with this IAM email account (e.g. if I create a key, where should I store the JSON key file, and would that approach even be feasible)?
Another way I've researched is psycopg2, so I could use the function cursor.copy_expert (which doesn't need any superuser rights or Postgres user credentials to copy the data), but I didn't succeed in connecting to the database with psycopg2 due to challenges with the Cloud SQL proxy.
Another idea was to use pg_dump or gcloud sql export csv.
I would be curious whether some of you have faced a similar challenge, how you solved it, and what the best way/practice might be.
You can try the Database Migration Service. You can set up a continuous migration configuration and use Cloud SQL for PostgreSQL.
Hello, after a lot of searching I've come to these solutions:
If you need a continuous copy, use the Database Migration Service; check this documentation.
If you need a one-shot copy:
you can restore your instance; see the bottom of this documentation page
you can create a bucket, back up your instance to it, and then import it from the other project, as sketched below
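A minimal sketch of the bucket route with gcloud, assuming hypothetical instance, bucket, and database/table names; the service accounts of the two instances also need access to the bucket:
# Export one table from the instance in project A to a bucket:
gcloud sql export csv source-instance gs://transfer-bucket/mytable.csv \
  --project=project-a --database=mydb --query="SELECT * FROM mytable"
# Import the CSV into the instance in project B:
gcloud sql import csv target-instance gs://transfer-bucket/mytable.csv \
  --project=project-b --database=mydb --table=mytable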

How to import sql file in Google SQL with binary mode enabled?

I have a database that is giving this error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected.
I tried importing the database through the console with gcloud sql import sql mydb gs://my-path/mydb.sql --database=mydb, but I don't see any flags for binary mode in the documentation. Is it possible at all?
Optional: is there a way to set this flag when importing through MySQL Workbench? I haven't seen anything about it there either, but maybe I'm missing some setting. If there is a way to set that flag, I can import my database through MySQL Workbench.
Thank you.
Depending on where the source database is hosted, on Cloud SQL or in an on-premises environment, the proper flags are set during the export so that the dump file is compatible with the target database.
Since you would like to import a file that has been exported from an on-premises environment, mysqldump is the suggested way to perform the export.
First, create a dump file as suggested in the documentation. Make sure to pay attention to the following 2 points:
Do not export customer-created MySQL users; this will cause the import to the new instance to fail. Instead, manually create the MySQL users you wish to have on the new instance.
Make sure that you have configured the appropriate flags so that the dump file contains all the necessary details, e.g. triggers, stored procedures, etc.
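For illustration, a hedged mysqldump invocation with hypothetical host and user names; note that --hex-blob encodes binary column values as hex, which also avoids the raw '\0' bytes that trigger the --binary-mode error on import:
# Dump the database with triggers and stored procedures included:
mysqldump --host=ON_PREM_HOST --user=USER -p \
  --databases mydb \
  --single-transaction \
  --hex-blob \
  --routines --triggers \
  --set-gtid-purged=OFF \
  --default-character-set=utf8mb4 > mydb.sql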
Then, create a Cloud Storage Bucket and upload the dump file to the bucket.
Before proceeding with the import, grant the Storage Object Admin role to the service account of the target Cloud SQL instance. You may do that with the following command:
gsutil iam ch serviceAccount:[SERVICE-ACCOUNT]:objectAdmin gs://[BUCKET-NAME]
You may locate the aforementioned Service Account in the Cloud SQL instance Overview, or by running the following command:
gcloud sql instances describe [INSTANCE_NAME]
The service account will be mentioned at the serviceAccountEmailAddress field.
Now you are able to do the import either from Console, or using the gcloud command or a REST API.
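For example, the gcloud variant would look like this, using the same bracketed placeholders:
gcloud sql import sql [INSTANCE_NAME] gs://[BUCKET-NAME]/mydb.sql --database=[DATABASE_NAME]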
More details in Google documentation
Best Practices for importing/exporting data

GCP connect other user psql instance

We are a student group that wants to make a simple PostgreSQL project on Google Cloud.
I created the database, tables, etc., but I can't figure out how my team-mates can connect to that database.
You can create users in the console to allow your team-mates to connect. Please follow the steps in the link for Creating a User.
As for the error message: you need to enable Cloud SQL Admin API.
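A sketch of both steps with gcloud, assuming hypothetical instance and user names:
# Enable the Cloud SQL Admin API in the project:
gcloud services enable sqladmin.googleapis.com
# Create a user your team-mates can connect with:
gcloud sql users create teammate --instance=my-instance --password=CHANGE_ME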

migrating from db2 Express-C to Developer version

I have a backup file from db2 express-c version 11.1 and I'd like to restore it into the db2 developer version (both on Windows machines). The RESTORE completed successfully and I can list the tables from the db2 command line
db2 list tables for schema XYZ
but when I try to access the data I get the following error message
SQL0551N The statement failed because the authorization ID does not have the
required authorization or privilege to perform the operation. Authorization
ID: "DB2USER". Operation: "SELECT". Object: "XYZ.Table1". SQLSTATE=42501
I logged in as the user who RESTORED the database. What's the issue here?
When restoring a Db2-LUW database backup to a different Db2 instance, it is wise to first set a Db2 registry variable on the target instance before performing the database restore. The account performing the restore will then be granted SECADM, DBADM, DATAACCESS, and ACCESSCTRL authorities on the restored database.
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=on
More information here.
Then perform the Db2-restore command.
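A sketch of the whole flow, with hypothetical database and backup-path names (registry variables take effect after an instance restart):
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=ON
db2stop
db2start
db2 restore database MYDB from C:\backups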
If you have not taken this action, you can also use manual GRANT statements (at the database level and object level) to adjust to the new Db2 instance, but for best results you should use the registry variable above.
You can also use the TRANSFER OWNERSHIP statement at various levels to achieve the security model. Details here. This is useful when the previous owner was the Db2-instance and the restored-database is in a different Db2-instance than the backed-up-database.
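For example, for the table from the question, a transfer to the account that performed the restore might look like:
db2 "TRANSFER OWNERSHIP OF TABLE XYZ.TABLE1 TO USER DB2USER PRESERVE PRIVILEGES"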

Rename the Amazon RDS master username

Changing the password is easily done through the console. Is there any way to change the master username after creation on RDS for PostgreSQL? If so, how?
You can't change the username. The following links describe how to change the master password; if Amazon adds the ability to change the username, you will find it there.
From the AWS CLI for RDS:
modify-db-instance --db-instance-identifier <value> --master-user-password <value>
--master-user-password (string)
The new password for the DB instance master user. Can be any printable ASCII character except "/", """, or "@".
Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the MasterUserPassword element exists in the PendingModifiedValues element of the operation response. Default: Uses existing setting.
Constraints: Must be 8 to 41 alphanumeric characters (MySQL, MariaDB, and Amazon Aurora), 8 to 30 alphanumeric characters (Oracle), or 8 to 128 alphanumeric characters (SQL Server).
The Amazon RDS Command Line Interface (CLI) has been deprecated. Instead, use the AWS CLI for RDS.
Via the AWS Management Console, choose the instance you need to reset the password for, click ‘Modify’ then choose a new master password.
If you don't want to use the AWS Console, you can use the rds-modify-db-instance command (as per Amazon's documentation for RDS) to reset it directly, given the AWS command line tools:
rds-modify-db-instance instance-name --master-user-password examplepassword
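With the current AWS CLI, the equivalent is something like the following (hypothetical instance name and password):
aws rds modify-db-instance --db-instance-identifier mydbinstance \
  --master-user-password 'NewStr0ngPassword' --apply-immediately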
No. As of April 2019 one cannot reset the 'master username'.
You cannot do it directly. However you can use the database migration service from AWS:
https://aws.amazon.com/dms/
Essentially you define the current database instance as your source and the new database with the correct username as your target of the migration.
This way you migrate the data from one database instance to another. As such you can change all properties, including the username.
This approach has some drawbacks:
You need to configure the migration, which takes a bit of time.
The data is migrated. This may lead to unexpected behavior, since not everything is necessarily migrated (e.g. views, etc.).
Depending on how you set everything up, you may experience downtime.
Though this may not be ideal for every use-case, I did find a workaround that allows for changing the username of the master user of an AWS RDS DB.
I am using PgAdmin4 with PostgreSQL 14 at the time of writing this answer.
Login with the master user you want to change the name of
Create a new user with the following privileges and membership:
Can login - yes
Superuser - no (not possible with a managed AWS RDS DB instance; if you need complete superuser access, DO NOT use a managed AWS RDS DB)
Create roles - yes
Create databases - yes
Inherit rights from the parent roles - yes
Can initiate streaming replication and backups - no (again, not possible directly without superuser permission)
Be sure to note the password used, as you will need to access this new account at least once to complete the name change.
Register a server with the credentials created in step 2. Disconnect from the original server, but do NOT remove it! Then connect to the newly created server.
Expand Login/Group Roles and click on the master user whose name you are changing.
Click the edit icon, edit the name, and save.
Right click the server with the master username, select Properties
Update the name under the General tab if desired
Update the username under the Connection tab to whatever you changed the master username to above
Save and reconnect to the server with the master user
You have successfully updated the master user's name on a managed AWS RDS DB instance, proud of you!
As @tdubs's answer states, it is possible to change the master username for a Postgres DB instance in AWS RDS. Whether it is advisable is another question; probably not.
Here are the SQL commands you need to issue:
Create a temporary user with the CREATEROLE privilege (while being logged in with the old master user)
CREATE ROLE temp_master PASSWORD '<temporary password>' LOGIN CREATEROLE;
Now connect to the database with the temp_master user
ALTER ROLE "<old_master_username>" RENAME TO "<new_master_username>";
-- NOTICE: MD5 password cleared because of role rename
ALTER ROLE "<new_master_username>" PASSWORD '<new password>';
Now connect to the database with the <new_master_username> user in order to clean up the temporary role
DROP ROLE temp_master;
And you're done!
Warning
AWS RDS does not know that the master username has been changed, so it will keep displaying the old one and assumes that is still the master username.
This means that if you use the AWS CLI or website to update the master password, it will have no effect.
And when connecting to the database with psql you'll see:
WARNING: role "<old_master_username>" does not exist