I am using Postgres as the database in my production system.
Version: psql (PostgreSQL) 12.9
Os: Ubuntu 18.04
PgAdmin 4: version 6.4
Server instance: AWS RDS
My database is growing day by day and is now several GBs in size. I want to take a backup of the database on my local system, but I don't want a full backup, since the production database is so big and I don't need some of the data (the tables that run into GBs).
Example:
If my database has 5 tables,
table A, table B, table C, table D, table E
I want to back up the schema (structure) of all tables, but not the data of every table.
Like,

Table name | Schema | Data
Table A    | Yes    | Yes
Table B    | Yes    | No
Table C    | Yes    | No
Table D    | Yes    | No
Table E    | Yes    | Yes
Here,
Schema = Yes means include the table's schema in the database backup
Schema = No means exclude the table's schema from the database backup
Data = Yes means include the table's data in the database backup
Data = No means exclude the table's data from the database backup
How can I take a local backup in the way described above?
My aim is to skip the data of the tables that hold the most data and run into GBs.
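A minimal sketch of one way to do this with pg_dump, assuming placeholder connection details and lower-case table names table_b, table_c and table_d: the --exclude-table-data option keeps a table's definition in the dump but skips its rows.

pg_dump -h mydb.example.rds.amazonaws.com -U myuser -d mydatabase \
    -F c -f backup.dump \
    --exclude-table-data='table_b' \
    --exclude-table-data='table_c' \
    --exclude-table-data='table_d'

The resulting custom-format dump contains the schema of all five tables but the data of table A and table E only; restoring it locally with pg_restore creates the excluded tables empty.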
Related
For a long time I have been working only with Oracle databases and haven't had much contact with PostgreSQL.
So now I have a few questions for people who know Postgres better.
1. Is it possible to create a connection from Postgres to Oracle (oracle_fdw?) and perform selects on views in a different schema than the one you connected to?
2. Is it possible to create a connection from Postgres to Oracle (oracle_fdw?) and perform inserts on tables in the same schema as the one you connected to?
Ad 1:
Yes, certainly. Just define the foreign table as
CREATE FOREIGN TABLE view_1_r (...) SERVER ...
OPTIONS (table 'VIEW_1', schema 'USERB');
Ad 2:
Yes, certainly. Just define a foreign table on the Oracle table and insert into it. Note that bulk inserts work, but won't perform well, since there will be a round trip between PostgreSQL and Oracle for each row inserted.
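As an illustration, a hypothetical writable foreign table over an Oracle table (the server name, schema, table and columns are all assumptions):

CREATE FOREIGN TABLE orders_w (
    id   integer,
    note text
) SERVER oracle_srv OPTIONS (schema 'USERB', table 'ORDERS');

INSERT INTO orders_w (id, note) VALUES (1, 'written from PostgreSQL');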
Both questions indicate a general confusion between a) the Oracle user that you use to establish the connection and b) the schema of the table or view that you want to access. These things are independent: The latter is determined by the schema option of the foreign table definition, while the former is determined by the user mapping.
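To make that distinction concrete, a sketch with placeholder names: the user mapping fixes the Oracle user that logs in, while the schema option on each foreign table picks whose object is accessed.

CREATE SERVER oracle_srv FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle-host:1521/ORCL');

-- the connection is established as Oracle user USERA ...
CREATE USER MAPPING FOR my_pg_user SERVER oracle_srv
    OPTIONS (user 'USERA', password 'secret');

-- ... yet this foreign table reads a view owned by USERB (column list assumed)
CREATE FOREIGN TABLE view_1_r (id integer)
    SERVER oracle_srv OPTIONS (schema 'USERB', table 'VIEW_1');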
I have two databases, Cloudant and IBM Db2. In each of them I have a table holding static data that is only read from and never updated. These were created a long time ago, and I'm not sure if they are still used today, so I want to do a clean-up.
I want to determine whether these tables, or rows from these tables, are still being read from.
Is there a way to record the read timestamp (or at least a flag, like a dirty bit, showing that it was accessed) on a row of the table when it is read?
OR
Record the read timestamp of the entire table (if any record from it is accessed)?
In Db2 there is the SYSCAT.TABLES.LASTUSED system catalog column, which records when the whole table was last used by a DML statement.
There is no way to track read access to individual table rows.
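For illustration, LASTUSED could be checked like this (schema and table names are placeholders):

SELECT TABSCHEMA, TABNAME, LASTUSED
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTABLE';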
Can we change the data directory for a single table or database in PostgreSQL?
My actual requirement is that I want to keep all table data on the C drive, but the customers table's data on the D drive. How can I achieve this?
You should create a tablespace for the tables outside the data directory.
For example:
CREATE TABLESPACE tbsp LOCATION 'D:\customer_tables';
Then add TABLESPACE tbsp to all CREATE TABLE statements that should be on D.
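For example (the column list is a placeholder), a new table can be created in the tablespace, or an existing one moved into it:

CREATE TABLE customers (id integer, name text) TABLESPACE tbsp;

-- or move an already existing table (this rewrites it under an exclusive lock):
ALTER TABLE customers SET TABLESPACE tbsp;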
I have table A in schema A and table B in schema B, with the same structure, in the same database. Whenever a DML change happens in table A, I need the same change applied to table B. For now, I am using triggers to do this. Is there a better alternative to triggers for this scenario?
As the tables belong to different microservices, I need one of the tables to keep its data intact even if the other table is dropped.
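For reference, a minimal sketch of the trigger approach described above, covering only INSERT and using assumed schema and table names:

CREATE FUNCTION schema_a.copy_to_b() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- mirror the new row into the other schema's table
    INSERT INTO schema_b.table_b SELECT NEW.*;
    RETURN NEW;
END;
$$;

CREATE TRIGGER mirror_insert
AFTER INSERT ON schema_a.table_a
FOR EACH ROW EXECUTE FUNCTION schema_a.copy_to_b();

UPDATE and DELETE would need analogous triggers.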
I have a SQLite db file which includes 2 tables, each with over 800k records. The SQLite db file size is 186 MB.
I planned to migrate those records to a PostgreSQL database.
To do this, I followed these steps:
1) prepared a view in SQLite to unify the 2 tables (they are related)
2) created a table with one column of type jsonb in Postgres
3) wrote a simple program to read from SQLite and then write to PostgreSQL
All 3 steps worked fine... but sadly I didn't get what I expected!
The PostgreSQL table size is 367 MB...
I thought it would be much smaller!
How is it possible that the SQLite tables (2 × 800k records) consume less disk space than PostgreSQL with a single jsonb column and half as many rows?!
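To see where the space actually goes, the sizes can be broken down like this (the table name is a placeholder); pg_relation_size covers only the main heap, while pg_total_relation_size also includes TOAST and indexes:

SELECT pg_size_pretty(pg_relation_size('my_jsonb_table'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('my_jsonb_table')) AS with_toast_and_indexes;

One likely contributor: jsonb stores every key name inside every row, whereas a flat table keeps column names only once in the catalog, so a row-per-document jsonb layout is often larger than the equivalent relational one.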