Error restoring db in Azure PostgreSQL Flexible server: 'extension "azure" is not allow-listed'

I'm using pgAdmin to backup/restore a database from one Azure PostgreSQL Flexible server to another. The source server was created last year, the destination server is new.
The restore process fails early with the error:
ERROR: extension "azure" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
I came across this post https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/introducing-ability-to-allow-list-extensions-in-postgresql/ba-p/3219124 announcing recent changes to PostgreSQL Flexible Server. If I'm reading this correctly, my new database server is affected by this change and I need to allow specific extensions under the "azure.extensions" server parameter.
In the backup file I can see:
CREATE EXTENSION IF NOT EXISTS azure WITH SCHEMA public;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA public;
And in Azure Portal I can see "UUID-OSSP" under the new "azure.extensions" server parameter, though there's nothing called just "azure". I enabled UUID-OSSP but the restore process still fails with the same error.
What am I missing here?

One suggestion is to install TimescaleDB, which is available as a supported extension package.
To learn about TimescaleDB, see https://docs.timescale.com/timescaledb/latest/
Change the Postgres server parameter "shared_preload_libraries", then create the extension:
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
Follow the entire procedure from the link below:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions
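On the specific error above: the "azure" extension itself never shows up in the azure.extensions list (as the question notes, there is nothing called just "azure"), so a possible workaround, assuming a plain-format .sql dump, is to check the allow list and simply remove that one CREATE EXTENSION line from the dump before restoring. A minimal sketch:
-- On the destination server, confirm which extensions are currently allow-listed:
SHOW azure.extensions;
-- In the dump file, remove (or comment out) the line that cannot be allow-listed:
-- CREATE EXTENSION IF NOT EXISTS azure WITH SCHEMA public;
-- The uuid-ossp line can stay; it restores once UUID-OSSP is enabled in azure.extensions:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA public;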

Related

Database Migration Service - Aurora PostgreSQL -> CloudSQL fails with confusing error (Unable to drop table postgres)

Attempting to migrate from AWS Aurora PostgreSQL 13.4 to Google Cloud SQL PostgreSQL 13.
Migration job gives this error:
finished setup replication with errors: failed to drop database "postgres": generic::unknown: retry budget exhausted (10 attempts): pq: database "postgres" is being accessed by other users
The user the DMS is using only has SELECT permissions on the source database (Aurora).
I'm very confused as to why it is trying to drop the "postgres" database at all. Not sure if it is trying to drop the database in the source or the destination. Not sure what I'm missing.
I've installed the necessary extensions in the destination DB (pg_cron). No difference.
The user in the source database has SELECT on all tables/schemas outlined in the docs (including the pglogical schema).
I've tried various PostgreSQL versions in the destination cluster (13.x, 14.x). No difference.
The "Test connection" tool when creating the migration job shows no errors. (There is a warning about a few tables not having primary keys, but that's it.)

Installation of pg_cron on Azure Flexible PostgreSQL

I am trying to install the pg_cron extension for Azure PostgreSQL Flexible Server.
According to documentation found here:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#postgres-13-extensions
pg_cron is an available extension, but when I try to install it:
create schema cron_pg;
CREATE EXTENSION pg_cron SCHEMA cron_pg;
What I get is:
SQL Error [0A000]: ERROR: extension "pg_cron" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
Hint: to see the full allow list of extensions, please run: "show azure.extensions;"
When executing:
show azure.extensions;
pg_cron is missing:
address_standardizer,address_standardizer_data_us,amcheck,bloom,btree_gin,btree_gist,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,lo,ltree,pageinspect,pg_buffercache,pg_freespacemap,pg_partman,pg_prewarm,pg_stat_statements,pg_trgm,pg_visibility,pgaudit,pgcrypto,pgrowlocks,pglogical,pgstattuple,plpgsql,postgis,postgis_sfcgal,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,tsm_system_rows,tsm_system_time,unaccent,uuid-ossp,lo,postgis_raster
What am I doing wrong?
You can tell pg_cron to run jobs in another database by updating the database column in the cron.job table.
For example:
UPDATE cron.job SET database = 'wordpress' WHERE jobname = 'wordpress-job';
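To make that concrete, a minimal sketch (the job name and schedule are made up for illustration, and the command is a plain VACUUM):
-- Register a job; by default it runs in the database named by cron.database_name (usually 'postgres'):
SELECT cron.schedule('wordpress-job', '*/5 * * * *', 'VACUUM');
-- Point the job at another database by editing its row in the metadata table:
UPDATE cron.job SET database = 'wordpress' WHERE jobname = 'wordpress-job';
-- Verify:
SELECT jobid, jobname, database, schedule, command FROM cron.job;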
Pretty late, but this question showed up when I was searching for the same problem, only with the pg_trgm extension. After some looking around I eventually realised you just need to update the database settings.
Go to your database in the Azure Portal, then to Server parameters, and search for the azure.extensions parameter. You can then click on the list and enable/disable the desired extensions (PG_CRON is available). The server will restart on save, and then you will be able to create the extensions in the database.
Seems that the pg_cron extension is already enabled, by default, in the default 'postgres' database.
The reason why I was not seeing this is because I am not using the default 'postgres' database. I have created my own DB which I was connected to.
This actually does not resolve my problem, because I can't execute jobs from pg_cron across databases...
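For the cross-database part: newer pg_cron releases (1.4+, assuming the Azure build ships one) expose cron.schedule_in_database, which avoids editing cron.job by hand. 'mydb' below is a stand-in for your own database name:
-- Run this from the database where pg_cron is installed (the one in cron.database_name):
SELECT cron.schedule_in_database('nightly-vacuum', '0 3 * * *', 'VACUUM', 'mydb');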

How to create a database template on the Azure PostgreSQL service

While using the database service for PostgreSQL on Azure, it looks like it is not possible to create a custom template database.
What I want to achieve is that a regular account can create new databases with a specific extension enabled.
The creation can be delegated, but enabling the extension fails in my test for all but the initial database admin account.
I've just tried the following in Azure Database for PostgreSQL and it worked (please bear with me as I walk you through my steps in Azure).
In Azure CLI (bash) I followed the steps in the Quick Start for Azure PostgreSQL to create a new resource group > create a PG server in it > create a new DB (originaldb) on that PG server. All worked just fine.
Then I enabled the earthdistance and cube (a prerequisite for earthdistance) extensions for originaldb (the DB I created in step 1).
Then I used the CLI to create another DB (dbclone) using originaldb as a template:
CREATE DATABASE dbclone TEMPLATE originaldb;
It worked just fine and cube and earthdistance extensions are enabled in dbclone DB.
Now on to trying it with another PG user: I created a PG user (user1) and granted this user DB creation privilege.
Then I logged on to my server as user1 and created another DB from CLI using the same command:
CREATE DATABASE dbclone2 TEMPLATE originaldb;
It worked too and I see that cube and earthdistance extensions are enabled in dbclone2 database.
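Condensed, the SQL side of that walkthrough looks like this (originaldb, dbclone2 and user1 are the names used above; the password is a placeholder):
-- As the admin, connected to originaldb:
CREATE EXTENSION cube;
CREATE EXTENSION earthdistance;   -- requires cube
-- Create a regular role that is allowed to create databases:
CREATE ROLE user1 LOGIN PASSWORD 'replace-me' CREATEDB;
-- Then, connected as user1 (originaldb must have no other active connections):
CREATE DATABASE dbclone2 TEMPLATE originaldb;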
Is that what you're trying to do? Are you hitting errors following the same steps or you're trying to do something different?

How can I check the template for PostGIS in Postgres on Ubuntu

I am following this tutorial
http://technobytz.com/install-postgis-postgresql-9-3-ubuntu.html
and I created a db with this command
createdb test_db -T template_postgis2.1
but I get this error
test_db2=# select postgis_version();
ERROR: function postgis_version() does not exist
LINE 1: select postgis_version();
This works if I use
create extension postgis
I want to know whether that is OK or whether I have an error, because I made the template before. Didn't that template automatically make the db PostGIS-enabled?
According to the official documentation on the topic, you have to create the extension in each new database you create. Why? This has to do with a change in the way a database is PostGIS-enabled in PostgreSQL 9.1+ and PostGIS 2+. Previously, there was a series of scripts that had to be run to load the functions, types, and other features of PostGIS into a database. Consequently, the best practice was to create a template database (template_postgis, etc.), run all the scripts against that template, and create each new PostGIS-enabled database against that template. In newer versions of PostgreSQL (9.1+), you can enable PostGIS support within a new database by simply executing the command CREATE EXTENSION postgis; as such, you should skip the template step entirely.
So to sum up:
CREATE EXTENSION postgis; is the way to go for PostgreSQL-9.1+ and PostGIS-2+
Making a template database is the way to go for prior versions of PostgreSQL or PostGIS.
I hope that helps clear it up!
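For completeness, the minimal check on PostgreSQL 9.1+ with the PostGIS 2+ packages installed:
-- In the new database:
CREATE EXTENSION postgis;
SELECT postgis_version();   -- should now return a version string instead of the error above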

PostgreSQL 9.1 backup and restore to 8.4

I'm trying to upload a database, which I developed locally, into our development server.
I installed PostgreSQL 9.1 on my machine and the development server uses 8.4.
When trying to restore the database to 8.4 using the dump file created by 9.1 I get the error:
pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near "EXTENSION"
LINE 1: CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalo...
and a quick research tells me that "EXTENSION" doesn't exist prior to 9.1.
I'm not really sure whether I should look for an option in pg_dump that ignores "extensions", as the database I'm trying to upload relies on the PostGIS extension for most of its data.
While upgrading the development server and installing PostGIS in the dev server is an option, I'd like to know of a different route, one wherein I do not need to edit anything on the server while maintaining the functions of the database I developed.
Of course other workarounds are welcomed, my sole aim in uploading my database to the server is to reduce the amount of reconfiguration I have to do on my project whenever I need to deploy something for our team.
This is an old post, but I had the same problem today and there is a better, more reliable way of loading a PG 9.1 db into a PG 8.4 server. The method proposed by Craig will fail on the target machine because the PLPGSQL language will not be created.
pg_dump -Upostgres -hlocalhost > 9.1.db
replace this line
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
with this line
CREATE LANGUAGE plpgsql;
delete this line or comment it out
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
You can use sed to make the changes.
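After those edits, the relevant part of the plain-format dump simply reads (everything else in the file stays untouched):
-- was: CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
CREATE LANGUAGE plpgsql;
-- the COMMENT ON EXTENSION plpgsql line has been removed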
Often it is not possible to upgrade an 8.4 server because of application dependencies.
Backporting databases can be painful and difficult.
You could try using 8.4's pg_dump to dump it, but it'll probably fail.
You'll probably want to extract the table and function definitions from a --schema-only dump text file, load them into the old DB by hand, then do a pg_dump --data-only and restore that to import the data.
After that, if you're going to continue working on your machine too, install PostgreSQL 8.4 and use that for further development so you don't introduce more incompatibilities and so it's easy to move dumps around.
In your position I'd just upgrade the outdated target server to 9.1.