I am trying to install the pg_cron extension on Azure PostgreSQL Flexible Server.
According to the documentation found here:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#postgres-13-extensions
pg_cron is listed as an available extension, but when I try to install it:
create schema cron_pg;
CREATE EXTENSION pg_cron SCHEMA cron_pg;
What I get is:
SQL Error [0A000]: ERROR: extension "pg_cron" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
Hint: to see the full allow list of extensions, please run: "show azure.extensions;"
When executing:
show azure.extensions;
pg_cron is missing:
address_standardizer,address_standardizer_data_us,amcheck,bloom,btree_gin,btree_gist,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,lo,ltree,pageinspect,pg_buffercache,pg_freespacemap,pg_partman,pg_prewarm,pg_stat_statements,pg_trgm,pg_visibility,pgaudit,pgcrypto,pgrowlocks,pglogical,pgstattuple,plpgsql,postgis,postgis_sfcgal,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,tsm_system_rows,tsm_system_time,unaccent,uuid-ossp,lo,postgis_raster
What am I doing wrong?
You can tell pg_cron to run jobs in another database by updating the database column in the cron.job table.
For example:
UPDATE cron.job SET database = 'wordpress' WHERE jobname = 'wordpress-job';
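For context, a minimal sketch of the full flow (the job name here comes from the example above, the schedule and command are placeholders, and the named-job overload of cron.schedule assumes a reasonably recent pg_cron version; run this while connected to the database where pg_cron is installed):
-- Schedule a job; by default it runs in the database where pg_cron is installed
SELECT cron.schedule('wordpress-job', '*/5 * * * *', $$VACUUM$$);
-- Point the existing job at another database instead
UPDATE cron.job SET database = 'wordpress' WHERE jobname = 'wordpress-job';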
Pretty late, but this question showed up when I was searching for the same problem with the pg_trgm extension. After some looking around I eventually realised you just need to update the server settings.
Go to your database in the Azure Portal, then to Server parameters, and search for the azure.extensions parameter. You can then click on the list and enable/disable the desired extensions (PG_CRON is available). The server will restart on save, and then you will be able to create the extensions in the database.
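After the restart, something along these lines should work (a sketch; as noted below, on Azure pg_cron lives in the default 'postgres' database, so connect there):
SHOW azure.extensions;                    -- pg_cron should now appear in the list
CREATE EXTENSION IF NOT EXISTS pg_cron;
SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_cron';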
It seems that the pg_cron extension is already enabled, by default, in the default 'postgres' database.
The reason I was not seeing this is that I am not using the default 'postgres' database; I had created my own DB, which I was connected to.
This actually does not resolve my problem, because I can't execute jobs from pg_cron across databases...
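As a possible workaround (not verified on Azure, and only available if the server ships pg_cron 1.4 or newer), newer pg_cron versions expose cron.schedule_in_database, which lets a job target another database directly; the job name, schedule, command and database below are placeholders:
-- Run from the database where pg_cron is installed; the fourth argument is the target database
SELECT cron.schedule_in_database('nightly-cleanup', '0 3 * * *', $$VACUUM$$, 'mydb');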
My company is evaluating Flyway for database releases. We have an AWS PostgreSQL version 11.2 database and I have installed Flyway Community Edition version 6.1.2.
I have successfully baselined the database and run several basic DDL scripts using Flyway migrate. However, now I am testing a more complicated scenario in which I need to run multiple scripts as one migration, but each script has to connect as a different PostgreSQL user. I have tried to do this by setting up two SQL files, each with its own config file, as described here: https://flywaydb.org/documentation/scriptconfigfiles
Every time I run the migrate command I get a property error: "ERROR: Unknown script configuration property: flyway.user" or "ERROR: Unknown script configuration property: user", etc, etc.
For debugging purposes I removed one SQL and config combo so that I now have only one of each. The files are named V2020.1.14.08.41.00__role_test1.sql and V2020.1.14.08.41.00__role_test1.sql.conf. I did confirm that any changes to that config file are being picked up by the migrate command. My config file contains the following properties (values changed for security reasons):
flyway.url=jdbc:postgresql://...
flyway.user=user1
flyway.password=password
flyway.schemas=test
I have also tried removing the flyway prefix:
url=jdbc:postgresql://...
user=user1
password=password
schemas=test
And removing the url parameter (both flyway.url and url) so the migration reads that value from the default flyway.conf file. Example:
user=user1
password=password
schemas=test
I get the errors every time. Anyone have any ideas? All help is greatly appreciated.
There is a typo in your code:
flyeay.user=user1
It should be:
flyway.user=user1
I'm trying to run an Entity Framework Core migration in an Azure DevOps Release Pipeline, because I want to run my DB scripts automatically on every release. My release pipeline does not have the source code; I just have compiled DLLs.
When I execute this command on my local machine, it runs successfully. How can I convert this command so I can use it with the DLLs?
dotnet ef database update --project MyEntityFrameworkProject --context MyDbContext --startup-project MyStartupProject
Another approach is to generate a migration script (a regular SQL script) during the build pipeline and make this script a part of your artifact. To do so, run the following command:
dotnet ef migrations script --output $(build.artifactstagingdirectory)\sql\migrations.sql -i
Note the -i flag, which makes the script idempotent so it can be run multiple times on the same database.
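For reference, the idempotent script generated this way looks roughly like the following (a sketch with hypothetical migration and table names; each migration is wrapped in a check against the __EFMigrationsHistory table):
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20200101000000_AddBlogsTable')
BEGIN
    CREATE TABLE [Blogs] (
        [BlogId] int NOT NULL IDENTITY,
        [Url] nvarchar(max) NULL,
        CONSTRAINT [PK_Blogs] PRIMARY KEY ([BlogId])
    );
END;
GO
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20200101000000_AddBlogsTable')
BEGIN
    INSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])
    VALUES (N'20200101000000_AddBlogsTable', N'2.1.1');
END;
GO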
Once you have this script as part of your artifact, you can run it against the database in your release pipeline by using the built-in Azure SQL Database Deployment task.
Check this link for more info
EDIT: As #PartickBorkowicz pointed out, there are some issues related to the fact that the database is not available in a regular way from the Build/Release Agent's perspective. Here are some additional tips on how to work without a database or a connection string stored anywhere in your code.
1. Build pipeline
If you do nothing, the Build Agent will require a database connection in order to run the dotnet ef migrations script command. But there's one trick which allows you to work without a database and connection string: IDesignTimeDbContextFactory.
It's enough to create a class like this:
public class YourDesignTimeContextFactory : IDesignTimeDbContextFactory<YourDbContext>
{
    public YourDbContext CreateDbContext(string[] args)
    {
        // Dummy local connection string: it is only used to configure the provider
        // at design time; no connection is opened when generating the migration script.
        var databaseConnectionString = "Data Source=(LocalDB)\\MSSQLLocalDB;Initial Catalog=LocalDB;Integrated Security=True;Pooling=False";

        var builder = new DbContextOptionsBuilder<YourDbContext>();
        builder.UseSqlServer(databaseConnectionString);
        return new YourDbContext(builder.Options);
    }
}
Once it is present in your project (you don't need to register it anywhere), your Build Agent will be able to generate the SQL script with the migrations logic without access to the actual database.
2. Release pipeline
Now you have the SQL script generated and included in the artifact from the build pipeline. The release pipeline is where you want this script to be run against the actual database, so you'll need a connection string to that database. To do this in a secure manner you should not store it anywhere in the code; a good way is to keep the secret in Azure Key Vault. There's a built-in task in Azure Release pipelines called Azure Key Vault which fetches your secrets, and you can use them in the next step: Azure SQL Database Deployment. You just need to set up the options:
AuthenticationType: Connection String
ConnectionString: `$(SqlServer--ConnectionString)` - this value depends on your secret name in Key Vault
Deploy type: SQL Script File
SQL Script: $(System.DefaultWorkingDirectory)/YourProject/drop/migrations/script.sql - this depends on how you set up your artifact in the build pipeline.
This way you're able to generate the migrations without access to the database and run the migrations SQL without storing your connection string anywhere in the code.
It turns out you can get away with the undocumented dotnet exec command. If you don't want to include your source code with the artifacts, you can use the following script (assuming the web application is called WebApp):
set rootDir=$(System.DefaultWorkingDirectory)\WebApp\drop\WebApp.Web
set efPath=C:\Program Files\dotnet\sdk\NuGetFallbackFolder\microsoft.entityframeworkcore.tools\2.1.1\tools\netcoreapp2.0\any\ef.dll
dotnet exec --depsfile "%rootDir%\WebApp.deps.json" --additionalprobingpath %USERPROFILE%\.nuget\packages --additionalprobingpath "C:\Program Files\dotnet\sdk\NuGetFallbackFolder" --runtimeconfig "%rootDir%\WebApp.runtimeconfig.json" "%efpath%" database update --verbose --prefix-output --assembly "%rootDir%\AssemblyContainingDbContext.dll" --startup-assembly "%rootDir%\AssemblyContainingStartup.dll" --working-dir "%rootDir%"
Note that the Working Directory (hidden under Advanced) of this command line task must be set to where the artifacts are (rootDir above).
Another option is to install the Build & Release Tools extension and use the "Deploy Entity Framework Core Migrations from a DLL" task.
You can read more info here, here and here.
I developed a stored procedure in PostgreSQL on my local machine and am going to deploy it to PostgreSQL in MS Azure. But strangely, I do not see a "Procedures" item inside the schema.
In Azure: (screenshot)
In my local: (screenshot)
I guess the user I am using in Azure does not have the permissions needed to view "Procedures".
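One way to double-check (a sketch; CREATE PROCEDURE and the pg_proc.prokind column only exist on PostgreSQL 11 and later) would be to query the catalogs directly instead of relying on the tree view:
SELECT version();
-- List all stored procedures in the current database
SELECT n.nspname AS schema_name, p.proname AS procedure_name
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prokind = 'p';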
If you know about this, please advise me. Thanks.
Currently I am trying to clone a Cosmos DB collection from one database to another database within the same Cosmos DB account. The API of the Cosmos DB account is set to the Mongo API.
I already tried to use Azure Data Factory, but it looks like there is no support for the Mongo API so far.
Does anyone have an idea how to do this with respect to efficiency, automation and performance?
Any ideas are appreciated.
You can use the Data Migration tool suggested by Microsoft to do this.
There is no way to take a backup and import it into Cosmos DB.
EDIT:
With the new Cosmic Clone tool, you can take a clone/backup including data, stored procedures, triggers, UDFs, etc. Read my blog post on the same.
"I already tried to use Azure Data factory, but it looks like that there is no support for the Mongo API so far."
Actually, the Cosmos DB Mongo API and SQL API both belong to the Azure Cosmos DB service, so you can still create a Cosmos DB linked service and dataset in Azure Data Factory for your database.
Then you could create a copy activity to import data from one collection to another.
If you want to make it an automated task, I suggest the following two ways to run the copy activity:
1. An Azure Function with a time trigger.
2. A WebJob running in the background of an Azure Web App.
Hope it helps. If you have any concerns, please feel free to let me know.
I used mongodump and mongorestore to copy my database (with MongoDB version 4.0.9 installed). From the Windows command line I ran the following commands from my MongoDB bin directory (C:\Program Files\MongoDB\Server\4.0\bin in my case).
This will dump all the collections in the DB, including index definitions, to the specified /out directory as BSON files (with accompanying .metadata.json files).
mongodump.exe /uri:URI /out:A_DIRECTORY_TO_DUMP_TO
I then ran the following command to take everything in the /out directory and write it to the target DB:
mongorestore.exe /uri:URI /dir:DIRECTORY_TO_RESTORE_FROM
NOTE: Before importing I also had to increase the throughput for the collection, otherwise I ran into rate-limiting errors. If you've set throughput at the database level, that may need to be changed instead.