How to copy tables using Azure Data Factory, without manual selection, preserving their names - azure-data-factory

I want to back up an existing table storage 1:1 to a new table storage. By default I have to manually select the tables, and on the next page I have to manually set each name to match the existing one. How can I make this automatic? See the screenshots:
Even if Select all is ticked, a new table that appears later won't be included
Every table gets a TableX name; I want them to have the same names as in the source

There isn't an event trigger for Table Storage that fires when a new table is created.
I would suggest you still use the Select all option, then create a schedule trigger to run the pipeline. That is a way to 'sync' the data between Table Storage 1 and Table Storage 2. If your Table Storage is not very large, it could be a good approach.
HTH.

Related

How do I copy data from one table to another, perform a schema change, and keep them in sync until cut-off in Postgres?

I have workloads that have heavy schema changes and other ETL operations that are locking.
Before doing schema changes on my primary table, I would like to first copy the existing contents of the primary table to a temporary table, then perform the schema change, then sync all new changes, and once the "time is right" (cutoff?), do the cut-over and have the temporary table become the primary table.
I know that I can use triggers in Postgres to sync data between two tables, and also use COPY to copy data from one table to another.
But I am not sure how I can copy the existing data first, then set up the trigger to ensure no data is lost, and then also do the cut-off so that the new table becomes the primary one.
What I am thinking is:
1. I issue a COPY from the primary table (TableA) to a temp table, TableB.
2. I then perform the schema change on TableB.
3. I then set up triggers from TableA to TableB for INSERT/UPDATE/DELETE.
... Now I am not sure how I can cut over so that TableB becomes TableA. I can use RENAME, perhaps? (A rough sketch follows below.)
It feels like I could run into some lost changes between Step 1 and Step 2?
Basically I am trying to ensure no data is lost between the three high-level operations. Is there a better way to do this?
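For illustration only, a minimal sketch of what the RENAME-based cut-over could look like, assuming Postgres tables named tablea and tableb (placeholders for TableA/TableB) and that remaining writes are blocked for the duration of the swap:
BEGIN;
-- block concurrent writes so nothing slips in while the names are swapped
LOCK TABLE tablea IN ACCESS EXCLUSIVE MODE;
ALTER TABLE tablea RENAME TO tablea_old;
ALTER TABLE tableb RENAME TO tablea;
COMMIT;
-- keep tablea_old around until the new table is verified, then drop it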

Dynamic table selection

Is it possible to dynamically select a table by name?
For example I have a table, and every time records are uploaded to it a backup is created first with the date appended to the table name.
table_20191108
table_20191109
table_20191110
table_20191111
What I would like to do is basically write some kind of dynamic SQL that always does
select * from table_MAXDATE
I would like to do this so I can compare table to the most recent backup (e.g. table_20191111) in order to see what changed between the two tables.
I haven't tried anything specific yet.
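One rough sketch, assuming Postgres, that the dated backups live in the public schema, and that base_table stands in for the real table name (all of these are placeholders): the latest table_YYYYMMDD can be looked up in the catalog and queried with dynamic SQL.
-- find the most recent table_YYYYMMDD backup and diff it against the base table
DO $$
DECLARE
    latest text;
BEGIN
    SELECT table_name
      INTO latest
      FROM information_schema.tables
     WHERE table_schema = 'public'
       AND table_name ~ '^table_[0-9]{8}$'
     ORDER BY table_name DESC
     LIMIT 1;

    -- rows in the current table that are missing from the latest backup
    EXECUTE format(
        'CREATE TEMP TABLE diff AS SELECT * FROM base_table EXCEPT SELECT * FROM %I',
        latest
    );
END;
$$;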

Is it possible to have database-wide table aliases?

I am about to model a PostgreSQL database, based on an Oracle database. The latter is old and its tables have been named using a three-letter scheme.
E.g. a table that holds parameters for tasks would be named TSK_PAR.
As I model the new database, I'd like to rename those tables to more descriptive names using actual words. My problem is that some parts of the software might rely on the old names until they're rewritten and adapted to the new scheme.
Is it possible to create something like an alias that's being used for the whole database?
E.g. I create a new task_parameters table, but add a TSK_PAR alias to it, so that if a SELECT * FROM TSK_PAR is issued, it automatically refers to the new name?
Postgres has no synonyms like Oracle.
But for your intended use case, views should do just fine. A view that simply does select * from task_parameters is automatically updatable (see here for an online example).
If you don't want to clutter your default schema (usually public) with all those views, you can create them in a different schema, and then adjust the user's search path to include that "synonym schema".
For example:
create schema synonyms;

create table public.task_parameters
(
    id integer primary key,
    ....
);

-- the view in the "synonyms" schema provides the alias for the old TSK_PAR name
create view synonyms.tsk_par
as
select *
from public.task_parameters;
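To make the old names resolvable without schema-qualifying them, the search path can then include the synonym schema; a sketch, where app_user and mydb are placeholders:
-- resolve unqualified names against public first, then the synonyms schema
ALTER ROLE app_user SET search_path = public, synonyms;
-- or set it for everyone connecting to one database:
ALTER DATABASE mydb SET search_path = public, synonyms;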
However, that approach has one annoying drawback: if a table is used by a view, the DDL statements allowed on it are limited; e.g. you can't drop or rename a column that the view references.
As we manage our schema migrations using Liquibase, we always drop all views before applying the "normal" migrations; once everything is done, we simply re-create all views (by running the SQL scripts stored in Git). With that approach, ALTER TABLE statements never fail because no views are using the tables. As creating a view is really quick, this doesn't add noticeable overhead when deploying a migration.

Best practices for performing a table swap in Redshift

We're in the process of running a handful of hourly scripts on our Redshift cluster which build summary tables for data consumers. After assembling a staging table, each script then runs a transaction which drops the existing table and replaces it with the staging table, as such:
BEGIN;
DROP TABLE IF EXISTS public.data_facts;
ALTER TABLE public.data_facts_stage RENAME TO data_facts;
COMMIT;
The problem with this operation is that long-running analysis queries will place an AccessShareLock on public.data_facts, preventing it from being dropped and thrashing our ETL cycle. I'm thinking a better solution would be one which renames the existing table, as such:
ALTER TABLE public.data_facts RENAME TO data_facts_old;
ALTER TABLE public.data_facts_stage RENAME TO data_facts;
DROP TABLE public.data_facts_old;
However, this approach presupposes that 1) public.data_facts exists, and 2) public.data_facts_old does not exist.
Do you know if there's a way to conduct this operation safely in SQL, without relying on application logic? (e.g. something like ALTER TABLE IF EXISTS)
I haven't tried it, but looking at the documentation for CREATE VIEW it seems that this can be done with late-binding views.
The main idea would be a view public.data_facts that users interact with. Behind the scenes, you can load new data and then swap the view to “point” to the new table.
Bootstrap
-- load data into public.data_facts_v0
CREATE VIEW public.data_facts AS
SELECT * from public.data_facts_v0 WITH NO SCHEMA BINDING;
Update
-- load data into public.data_facts_v1
CREATE OR REPLACE VIEW public.data_facts AS
SELECT * from public.data_facts_v1 WITH NO SCHEMA BINDING;
DROP TABLE public.data_facts_v0;
The WITH NO SCHEMA BINDING means the view will be late-binding. “A late-binding view doesn't check the underlying database objects, such as tables and other views, until the view is queried.” This means the update can even introduce a table with renamed columns or a completely new structure.
Notes:
It might be a good idea to wrap the swap operations in a transaction to make sure we don't drop the previous table if the view swap fails.
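A sketch of that wrapped swap, reusing the table names from the example above:
BEGIN;
-- repoint the late-binding view at the freshly loaded table
CREATE OR REPLACE VIEW public.data_facts AS
SELECT * from public.data_facts_v1 WITH NO SCHEMA BINDING;
-- the old table is only dropped if the view swap above succeeded
DROP TABLE public.data_facts_v0;
COMMIT;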
You can add a new load_time column (timestamp encode runlength default getdate()) to your target table, and make your ETL do this:
INSERT INTO public.data_facts
SELECT * FROM public.data_facts_staging;
DELETE FROM public.data_facts
WHERE load_time<(select max(load_time) from public.data_facts);
DROP TABLE public.data_facts_staging;
Note: public.data_facts_staging should have exactly the same structure as public.data_facts, except that the last column of public.data_facts is load_time, so that on insert it will be populated with the current timestamp.
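For illustration, the two tables could be declared like this (the id and value columns are made-up placeholders; only load_time and its position matter):
-- target table: same columns as staging, plus load_time as the LAST column
CREATE TABLE public.data_facts (
    id        BIGINT,
    value     VARCHAR(256),
    load_time TIMESTAMP DEFAULT getdate() ENCODE runlength
);
-- staging table: identical structure, but without load_time
CREATE TABLE public.data_facts_staging (
    id    BIGINT,
    value VARCHAR(256)
);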
The only implication is that it requires extra disk space for a moment, between inserting the new rows and deleting the old ones, and load_time always has to be the last column. Also, you have to VACUUM the table every time you do this.
Another good thing about this is that if your ETL fails and the staging table is empty, or there is no staging table at all, you won't lose your data. In the pure SQL scenario of swapping tables with DDL, you're not protected from dropping the target table when the staging table is missing. In the suggested scenario, if no new rows are inserted, the DELETE statement deletes nothing (there are no rows below the max load time), so the worst case is just keeping the old version of the data.
P.S. There is a command that, instead of INSERT ... SELECT ..., just changes the pointer from the staging table to the target table (ALTER TABLE ... APPEND FROM ...), but I guess it requires the same type of lock as ALTER TABLE, so I don't suggest it.

Create TABLE in many Postgres schemas with a single command

I have two schemas in my Postgres BOOK database, MSPRESS and ORELLY.
I want to create the same table in two schemas:
CREATE TABLE MSPRESS.BOOK(title TEXT, author TEXT);
CREATE TABLE ORELLY.BOOK(title TEXT, author TEXT);
Now, I want to create the same table in all my schemas with a single command.
To accomplish this, I thought about event triggers available in Postgres 9.3 (http://www.postgresql.org/docs/9.3/static/event-triggers.html). By intercepting the CREATE TABLE command with my event trigger, I hoped to determine the name of the table and the schema in which it is created, and simply repeat the same command for all available schemas. Sadly, the event trigger procedure does not get the name of the table being created.
Is there a way to synchronize Postgres schemas in 'real time'?
Currently only TG_EVENT and TG_TAG are available from an event trigger, but this feature will likely be expanded. In the meantime, you can query information_schema for differences and try to add every table where it's missing; but don't forget that this synchronization will itself fire more event triggers, so you should do it carefully.
But if you just want to build several schemas with the same structure (without further synchronization), you could simply write schema-less queries to build them and run them on every schema one by one, after changing the search path with SET search_path / SET SCHEMA.
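A sketch of that approach with the two schemas from the question; the CREATE TABLE statement itself is schema-less, and the search path decides where the table lands:
SET search_path TO mspress;
CREATE TABLE book (title TEXT, author TEXT);
SET search_path TO orelly;
CREATE TABLE book (title TEXT, author TEXT);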