Installation of pg_cron on Azure Flexible PostgreSQL - azure-postgresql

I am trying to install the pg_cron extension on an Azure PostgreSQL Flexible server.
According to the documentation found here:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#postgres-13-extensions
pg_cron is listed as an available extension, but when I try to install it:
create schema cron_pg;
CREATE EXTENSION pg_cron SCHEMA cron_pg;
What I get is:
SQL Error [0A000]: ERROR: extension "pg_cron" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
Hint: to see the full allow list of extensions, please run: "show azure.extensions;"
When executing:
show azure.extensions;
pg_cron is missing:
address_standardizer,address_standardizer_data_us,amcheck,bloom,btree_gin,btree_gist,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,lo,ltree,pageinspect,pg_buffercache,pg_freespacemap,pg_partman,pg_prewarm,pg_stat_statements,pg_trgm,pg_visibility,pgaudit,pgcrypto,pgrowlocks,pglogical,pgstattuple,plpgsql,postgis,postgis_sfcgal,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,tsm_system_rows,tsm_system_time,unaccent,uuid-ossp,lo,postgis_raster
What am I doing wrong?

You can tell pg_cron to run jobs in another database by updating the database column in the cron.job table.
For example:
UPDATE cron.job SET database = 'wordpress' WHERE jobname = 'wordpress-job';
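A fuller sketch, assuming a recent pg_cron and hypothetical job and database names, would be to schedule the job from the postgres database (where pg_cron is installed) and then point it at the other database:
-- 'nightly-vacuum' and 'my_app_db' are placeholder names for illustration
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM ANALYZE');
UPDATE cron.job SET database = 'my_app_db' WHERE jobname = 'nightly-vacuum';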

Pretty late, but this issue showed up when I was searching for the same problem, only with the pg_trgm extension. After some looking around I eventually realised you just need to update the database settings.
Go to your database in the Azure Portal, then to Server parameters, and search for the azure.extensions parameter. You can then click on the list and enable/disable the desired extensions (PG_CRON is available). The server will restart on save, and you will then be able to create the extensions in the database.
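Once the server has restarted with pg_cron in azure.extensions (and, for pg_cron, also in the shared_preload_libraries server parameter), a quick sanity check from a SQL session could look like this:
-- pg_cron should now appear in the allow-list
SHOW azure.extensions;
-- create the extension (for pg_cron this is typically done in the postgres database)
CREATE EXTENSION IF NOT EXISTS pg_cron;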

It seems that the pg_cron extension is already enabled, by default, in the default 'postgres' database.
The reason I was not seeing this is that I am not using the default 'postgres' database; I was connected to a database I created myself.
This actually does not resolve my problem, because I can't execute jobs from pg_cron across databases...
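If the server is running pg_cron 1.4 or later, one possible workaround is to stay connected to the postgres database and use cron.schedule_in_database to target another database; the job name, command, and database below are placeholders:
-- run from the postgres database, where the pg_cron extension lives
SELECT cron.schedule_in_database(
    'refresh-stats',                       -- job name (placeholder)
    '*/15 * * * *',                        -- every 15 minutes
    'REFRESH MATERIALIZED VIEW my_view',   -- command to run (hypothetical object)
    'my_app_db'                            -- database to run it in (placeholder)
);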

Related

Error restoring db in Azure PostgreSQL Flexible server: 'extension "azure" is not allow-listed'

I'm using pgAdmin to back up and restore a database from one Azure PostgreSQL Flexible server to another. The source server was created last year; the destination server is new.
The restore process fails early with the error:
ERROR: extension "azure" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
I came across this post https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/introducing-ability-to-allow-list-extensions-in-postgresql/ba-p/3219124 announcing recent changes to PostgreSQL Flexible Server. If I'm reading this correctly, my new database server is affected by this change and I need to allow specific extensions under the "azure.extensions" server parameter.
In the backup file I can see:
CREATE EXTENSION IF NOT EXISTS azure WITH SCHEMA public;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA public;
And in Azure Portal I can see "UUID-OSSP" under the new "azure.extensions" server parameter, though there's nothing called just "azure". I enabled UUID-OSSP but the restore process still fails with the same error.
What am I missing here?
It is suggested to install TimescaleDB, which ships the supporting extension packages.
To learn about TimescaleDB, see https://docs.timescale.com/timescaledb/latest/
Change the Postgres server parameter "shared_preload_libraries" to include timescaledb.
Then create the extension:
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
Follow the entire procedure from the link below:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions
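As a minimal verification sketch after updating shared_preload_libraries and restarting the server:
-- timescaledb must appear in this list for the extension to load
SHOW shared_preload_libraries;
-- create the extension and confirm the installed version
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';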

PGAdmin restore remote database [duplicate]

This question already has answers here:
Export and import table dump (.sql) using pgAdmin
Let me first state that I am not a DBA guy, but I do have a question regarding restoring remote databases using pgAdmin.
I have the pgAdmin tool (v4.27) running in a Docker container and I use this portal to maintain two separate PostgreSQL databases, both also running in Docker containers. I installed pgAgent in both database containers and run scheduled daily backups, defined via pgAdmin and stored in the container of each corresponding database. So far so good.
Now I want to restore one of these databases using the latest daily backup file (*.sql), but the Restore dialog of pgAdmin only looks for files stored locally (in the pgAdmin container).
Whatever I tried or searched for on the internet, it seems impossible to show a list of remote backup files in pgAdmin or to manually run a remote SQL file. Is this even possible in pgAdmin? Running psql in the query editor is not possible (duh...), and since I can't find the remote SQL restore file, I have no clue how to run this code within pgAdmin against the corresponding remote database container.
The only solution I can think of so far is scheduling a restore job with no calendar that is triggered manually when needed, but that's not the prettiest solution.
Am I missing something, did I overlook the right documentation, or have I created a silly, unmaintainable solution?
Thanks in advance for thinking along and kind regards,
Aad Dijksman
You cannot restore a plain format dump (an SQL script) with pgAdmin. You will have to use psql, the command line client.
COPY statements and data are mixed in such a dump, and that would make pgAdmin choke.
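To illustrate why: a plain-format dump interleaves DDL and COPY data in a single script, roughly like the sketch below (an invented table, not from the actual backup), and psql is what knows how to feed the inline rows after COPY ... FROM stdin:
CREATE TABLE public.orders (
    id integer NOT NULL,
    customer text
);
COPY public.orders (id, customer) FROM stdin;
1	Alice
2	Bob
\.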
The solution by @Laurenz Albe points out that it is best to use the command-line psql here, and that would be my first go-to.
However, if for whatever reason you don't have access to the command line and are only able to connect to this database via pgAdmin, there is another solution, which you can find here:
Export and import table dump (.sql) using pgAdmin
I recommend looking at the solution by Tomas Greif.

AWS DMS streaming replication: logical decoding output plugin (test_decoding) not accessible

I'm trying to migrate a PostgreSQL DB hosted in the cloud (on a DO droplet) to RDS using AWS Database Migration Service (DMS).
I've successfully configured the replication instance and endpoints.
I've created a task with 'Migrate existing data and replicate ongoing changes'. When I start the task it shows the error: ERROR: could not access file "test_decoding": No such file or directory.
I've tried to create a replication slot manually from my DB console, and it throws the same error.
I've followed the procedures suggested in the DMS documentation for Postgres.
I'm using PostgreSQL 9.4.6 on my source endpoint.
I presume that the problem is that the output plugin test_decoding is not accessible, which prevents the replication.
Please assist me to resolve this. Thanks in advance!
You must install the postgresql-contrib additional supplied modules on your source endpoint.
If it is installed, make sure the directory where the test_decoding module is located is the same as the directory where PostgreSQL expects it.
On *nix, you can check the module directory with:
pg_config --pkglibdir
If it is not the same, copy the module, make a symlink, or use whatever other solution you prefer.
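Once the module is in the expected directory (and wal_level is set to logical with a free replication slot available), a quick check from SQL before rerunning the DMS task could be the following; the slot name is just an example:
-- requires wal_level = logical and a free max_replication_slots entry
SELECT * FROM pg_create_logical_replication_slot('dms_test_slot', 'test_decoding');
-- drop the test slot again so it does not hold back WAL
SELECT pg_drop_replication_slot('dms_test_slot');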

How can I check the template for PostGIS in Postgres on Ubuntu

I am following this tutorial
http://technobytz.com/install-postgis-postgresql-9-3-ubuntu.html
and I created a db with this command
createdb test_db -T template_postgis2.1
but I get this error
test_db2=# select postgis_version();
ERROR: function postgis_version() does not exist
LINE 1: select postgis_version();
This works if I use
create extension postgis
I want to know whether that is OK or whether I have an error, because I made the template beforehand. Shouldn't that template have automatically made the database PostGIS-enabled?
According to the official documentation on the topic, you have to create the extension in each new database you create. Why? This has to do with a change in the way a database is PostGIS-enabled in PostgreSQL-9.1+ and PostGIS-2+. Previously, there were a series of scripts that had to be run to load the functions, types, and other features of PostGIS into a database. Consequently, the best practice was to create a template database (template_postgis, etc.), run all the scripts against that template, and create each new PostGIS-enabled database against that template. In newer versions of PostgreSQL (9.1+), you can enable PostGIS support within a new database by simply executing the command CREATE EXTENSION postgis; as such, you should skip the template step entirely.
So to sum up:
CREATE EXTENSION postgis; is the way to go for PostgreSQL-9.1+ and PostGIS-2+
Making a template database is the way to go for prior versions of PostgreSQL or PostGIS.
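For example, on PostgreSQL 9.1+ with PostGIS 2+ a freshly created database (no template involved) only needs something like:
-- enable PostGIS directly in the target database
CREATE EXTENSION IF NOT EXISTS postgis;
-- the function from the question should now resolve
SELECT postgis_version();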
I hope that helps clear it up!

Rename SQL Azure database?

How can I rename a database in SQL Azure?
I have tried Alter database old_name {MODIFY NAME = new_name} but it did not work.
Is this feature available in SQL Azure or not?
Just so people don't have to search through the comments to find this... Use:
ALTER DATABASE [dbname] MODIFY NAME = [newdbname]
(Make sure you include the square brackets around both database names.)
Please check that you are connected to the master database and that you are not trying to rename a system database.
Please find more info here: https://msdn.microsoft.com/en-US/library/ms345378.aspx
You can also connect with SQL Server Management Studio and rename it in Object Explorer. I just did so and the Azure Portal reflected the change immediately.
Do this by clicking on the database name (as the rename option from the dropdown will be greyed out)
Connect with SQL Server Management Studio to your Azure database server, right-click on the master database and select 'New Query'. In the New Query window that will open type ALTER DATABASE [dbname] MODIFY NAME = [newdbname].
It's very simple now: connect to the DB via SQL Server Management Studio and just rename it as you generally would (press F2 on the DB name). It will allow you to do this and the change is reflected immediately.
I can confirm the
ALTER DATABASE [oldname] MODIFY NAME = [newname];
works without connecting to master first, BUT if you are renaming a restored Azure database, don't miss the space before the final hyphen:
ALTER DATABASE [oldname_2017-04-23T09 -17Z] MODIFY NAME = [newname];
And be prepared for a confusing error message in the Visual Studio 2017 Message window when executing the ALTER command:
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
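Despite that message, the rename usually completes; a quick check against the master database (the new name below is a placeholder) could be:
SELECT name FROM sys.databases WHERE name = 'newdbname';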
You can easily do it from SQL Server Management Studio, even from the community edition.