In an ASP.NET Core application in Azure, I am using some stored procedures in a Cosmos DB database.
I know how to create these stored procedures with C# code, but I do not know when or where to deploy them (in my web app, by PowerShell, in a separate command-line application...)
When bootstrapping your DocumentClient, upsert your stored procedures for unpartitioned collections to simplify deployment. For partitioned collections, you have to delete and re-create the stored procedures, which can be problematic if you are upgrading your system in flight.
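A minimal sketch of that bootstrap step, using the DocumentClient from the Microsoft.Azure.DocumentDB SDK (the database, collection, and stored procedure names and the script path are placeholders):

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    public static class StoredProcedureDeployer
    {
        public static async Task DeployAsync(DocumentClient client, bool isPartitioned)
        {
            var collectionUri = UriFactory.CreateDocumentCollectionUri("MyDatabase", "MyCollection");
            var sproc = new StoredProcedure
            {
                Id = "bulkImport",
                Body = File.ReadAllText("StoredProcedures/bulkImport.js")
            };

            if (!isPartitioned)
            {
                // Unpartitioned collection: upsert is idempotent, so just run it on every startup.
                await client.UpsertStoredProcedureAsync(collectionUri, sproc);
            }
            else
            {
                // Partitioned collection: delete the existing sproc (if any), then re-create it.
                var sprocUri = UriFactory.CreateStoredProcedureUri("MyDatabase", "MyCollection", sproc.Id);
                try
                {
                    await client.DeleteStoredProcedureAsync(sprocUri);
                }
                catch (DocumentClientException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
                {
                    // First deployment: nothing to delete.
                }
                await client.CreateStoredProcedureAsync(collectionUri, sproc);
            }
        }
    }

Calling something like this once from your startup code keeps the stored procedures in sync with what the deployed application expects.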
I am using the Rundeck Docker container. It was running well for two months and then it suddenly crashed. I lost all the data for a project that I had created using the CLI. Is there a way to change the default path used to store all project-related data, including job definitions, resources, etc.?
Out of the box, Rundeck stores project and job data in its internal H2 database. That database is intended only for testing and will probably fail under a lot of data (storing projects on the filesystem is deprecated now). The best approach is to use a "real" database like MySQL, PostgreSQL, or Oracle, so that Rundeck keeps all project and job data on a robust backend.
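For example, a minimal datasource sketch in rundeck-config.properties pointing Rundeck at MySQL (hostname, database name, and credentials are placeholders, and the exact driver class may vary with your Rundeck/MySQL version; if I recall correctly, the official Docker image also accepts equivalent RUNDECK_DATABASE_* environment variables):

    # rundeck-config.properties: use an external MySQL instance instead of the embedded H2
    dataSource.driverClassName = com.mysql.cj.jdbc.Driver
    dataSource.url = jdbc:mysql://mysql-host:3306/rundeck?autoReconnect=true&useSSL=false
    dataSource.username = rundeckuser
    dataSource.password = rundeckpassword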
Check these MySQL, PostgreSQL, and Oracle Docker environment examples.
Of course, having a backup policy for your instance would be ideal to keep all of its data safe.
I am planning to run Spring Batch on Azure in a serverless setup and am looking to explore Cosmos DB to store and manipulate the data.
Can I still use the Spring Batch metadata tables with Cosmos DB? If not, where should the Spring Batch metadata table details be stored?
How can we schedule a batch job on Azure? Is there any complete working example?
Cosmos DB is not a supported database in Spring Batch, but you may be able to use one of the supported types if the SQL variant is close enough.
Please refer to the Non-standard Database Types in a Repository section of the documentation for more details and a sample.
We are using GCP Cloud SQL for our PostgreSQL database.
At the moment, one of our applications uses large objects, and I was wondering how to perform a vacuumlo operation on such platforms (the question is probably also valid for AWS RDS or any other cloud PostgreSQL provider).
Is writing custom queries/procedures to perform the same task the only solution?
Since vacuumlo is a client tool, it should work just fine with hosted databases.
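For example (endpoint, user, and database name are placeholders), you point it at the Cloud SQL or RDS endpoint exactly as you would with psql; the -n flag does a dry run first:

    # dry run: list orphaned large objects without removing anything
    vacuumlo -n -v -h 10.20.30.40 -p 5432 -U appuser appdb

    # actual cleanup
    vacuumlo -v -h 10.20.30.40 -p 5432 -U appuser appdb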
We are in the process of refactoring our databases. As part of that, we have split our data, which used to live in a single Postgres table, into a new Postgres table schema and DynamoDB tables. What is the best way to migrate the data from the old schema into this new hybrid schema? We are thinking of writing a Java program to do it, but wanted to check whether we can leverage some AWS offering to do this in an efficient way.
Check out the AWS Database Migration Service.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
See https://aws.amazon.com/dms/
We have a fairly large (at least to us) database with over 20,000 tables running on an AWS EC2 instance, but for several reasons we'd like to move it into an AWS RDS instance. We've tried a few different approaches to migrate into RDS, but given the data volume involved (2 TB), RDS's restrictions (users and permissions), and compatibility issues, we haven't been able to accomplish it.
Given the above, I was wondering whether PostgreSQL actually supports something like mapping a remote schema into a database. If that were possible, we could attempt individual per-schema migrations instead of the whole database at once, which would make the process less painful.
I've read about the IMPORT FOREIGN SCHEMA feature, which seems to be supported from version 9.5 and seems to do the trick, but is there something like that for 9.4.9?
You might want to look at the AWS Database Migration Service and the associated Schema Conversion Tool.
These can move data from an existing database into RDS and convert the schema and associated objects, or at least report on what would need to be changed.
You can run this in AWS, point it at your existing EC2-based database as the source, and use a new RDS instance as the destination.