Union all tables with same names from different schemas in Redshift

I am looking to create a UNION ALL over tables that have the same name in different schemas.
Is there a way to do this in Redshift other than the brute-force method of naming every individual table and column in the UNION ALL statement?
example:
table a in schemas z, y, x
table b in schemas z, y
table c in schemas y, x

table a columns - d, e, f, g, h
table b columns - d, e, g, h, i
table c columns - d, e, f

I'd go with writing a Lambda function (or just code on your computer) to inspect the system tables for the existence of the tables in question and then find the columns of each table. This code would then compose the SQL.
You could also likely do this in a stored procedure, but that is more work to get running. If many people need to be able to use it from inside the database, it is worth the effort; otherwise I'd go simple.
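
For example, the metadata the generator needs is available in Redshift's SVV_COLUMNS system view. A minimal sketch, using the schema and table names from the example above:

-- Find which schemas contain a table named 'a' and what its columns are
SELECT table_schema, column_name, data_type, ordinal_position
FROM svv_columns
WHERE table_name = 'a'
  AND table_schema IN ('x', 'y', 'z')
ORDER BY table_schema, ordinal_position;

From that output the code would emit one SELECT per schema, e.g. for table a:

SELECT 'z' AS source_schema, d, e, f, g, h FROM z.a
UNION ALL
SELECT 'y', d, e, f, g, h FROM y.a
UNION ALL
SELECT 'x', d, e, f, g, h FROM x.a;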

Related

What is the most efficient way to create a table B identical to table A but with different column ordering?

I have a PG table A with columns (x, y, z). I needed an id, so I added a BIGSERIAL id column, but then A ends up as A(x, y, z, id), and I would rather have the columns in the order A(id, x, y, z).
The only way I know to accomplish this is by copying A into a new temporary table B with the desired order, then dropping A and renaming B to A. I would use CREATE TABLE B AS SELECT ... FROM A ... but do I get all the same indexes and constraints as in A? What's the most streamlined, simplest way to do this?
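
For reference, CTAS copies only the data and column types; indexes, constraints, defaults, and sequence ownership are not copied and have to be recreated by hand. A minimal sketch of the copy-and-swap, assuming id is the primary key and the BIGSERIAL created a sequence named a_id_seq:

BEGIN;
CREATE TABLE b AS SELECT id, x, y, z FROM a;   -- desired column order
ALTER TABLE b ADD PRIMARY KEY (id);            -- CTAS does not copy constraints
ALTER TABLE b ALTER COLUMN id SET DEFAULT nextval('a_id_seq');  -- nor defaults
ALTER SEQUENCE a_id_seq OWNED BY b.id;         -- keep the sequence alive when a is dropped
-- recreate any other indexes and constraints on b here, then swap:
DROP TABLE a;
ALTER TABLE b RENAME TO a;
COMMIT;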

How to copy a Specific Partitioned Table

I would like to shift data from a specific partition of a parent table to a separate table. Can someone suggest a better way?
If I create a table:
CREATE TABLE parent (a int, b int, c date) PARTITION BY RANGE (c);
CREATE TABLE dec_partition PARTITION OF parent FOR VALUES FROM ('2021-02-12') TO ('2021-03-12');
Now I want to copy the table dec_partition to a separate table in a single command, taking the least time.
NOTE: The table has around 4 million rows with 20 columns (one of the columns is jsonb).
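
A single-statement sketch (the target name dec_copy is made up): a partition can be read like any ordinary table, so a CTAS over it copies just that partition's rows:

CREATE TABLE dec_copy AS TABLE dec_partition;
-- equivalent to: CREATE TABLE dec_copy AS SELECT * FROM dec_partition;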

Postgres table sync between different schemas

I have table A in schema A and table B in schema B, with the same structure, in the same database. Whenever a DML change happens in table A, I need the same change applied to table B. For now, I am using triggers to do this. Is there a better alternative to triggers for this scenario?
As the tables belong to different microservices, I need one of the tables to keep its data unmodified even if the other table is dropped.
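
For context, a minimal sketch of the trigger-based setup the question describes (schema and table names schema_a/schema_b are assumptions; only INSERT is shown, UPDATE and DELETE would need analogous triggers):

CREATE OR REPLACE FUNCTION schema_a.sync_to_b() RETURNS trigger AS $$
BEGIN
  -- mirror the new row into the copy in the other schema
  INSERT INTO schema_b.b VALUES (NEW.*);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_sync_to_b
AFTER INSERT ON schema_a.a
FOR EACH ROW EXECUTE FUNCTION schema_a.sync_to_b();  -- PostgreSQL 11+; EXECUTE PROCEDURE on older versions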

Ignore specific column on pg_dump/pg_restore

Is it possible to ignore specific columns when dumping a PostgreSQL database using pg_dump? I have a table with columns X and Y. Is it possible to dump only column X with pg_dump?
Not directly; however, you could
CREATE TABLE b AS SELECT x FROM a;
and then pg_dump that.
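
Spelled out end to end (the database name mydb and the dump file name are stand-ins):

CREATE TABLE b AS SELECT x FROM a;
-- shell: pg_dump --table=b mydb > table_b.sql
DROP TABLE b;  -- remove the helper table once the dump is done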

Django-ORM Left join with all columns of both tables

I have two tables A and B, and I need all the columns of both tables using the Django ORM (left join).
I am new to Django and to programming, please help.
One way is to use the .values() call on your query (though what you are asking is not very clear). This returns a queryset of dictionaries rather than of model instances, but behaves more like a LEFT JOIN done in SQL directly against the database, i.e. it returns rows with null entries from table B.
Assuming table A's model has a foreign key to table B's model (called table_b below):

TableA.objects.filter(**your_filters).values(
    "field1", "field2",                    # TableA columns
    "table_b__field1", "table_b__field2")  # TableB columns via the FK
https://docs.djangoproject.com/en/1.3/topics/db/aggregation/#values