Move pg_catalog schema to new database or not - postgresql

I want to move the schemas and data of my current Postgres 11 database to a new Postgres 13 database.
I use pg_dump for that purpose, first dumping the schemas, data, etc. to files. I am not sure whether I should also dump the pg_catalog schema. Can someone advise?

pg_catalog contains the database metadata. When you export a database, you do not directly export data from these tables; they get exported implicitly, in the form of the CREATE and ALTER statements that pg_dump generates.
pg_dump will export all data from a database (but it excludes the definitions of users and tablespaces, and permissions on the database).
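For a complete 11-to-13 migration, a common pattern is to dump the cluster-wide objects with pg_dumpall --globals-only and the database itself with pg_dump. A minimal sketch, where old_host, new_host, and dbname are placeholders for your setup:

# On the old (v11) side: roles and tablespaces, then the database itself
pg_dumpall -h old_host --globals-only > globals.sql
pg_dump -h old_host -Fc dbname > dbname.dump

# On the new (v13) side: restore the globals first, then the database
psql -h new_host -d postgres -f globals.sql
createdb -h new_host dbname
pg_restore -h new_host -d dbname dbname.dump

pg_catalog is rebuilt automatically by the new server; there is nothing to restore from it.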

Related

How to copy all my tables from a Postgres db to another one without pg_dump?

I need to copy all my tables from my Postgres database to another one. Usually I'd do it with pg_dump and then pg_restore, but unfortunately I don't have rds_superuser permissions, so that doesn't work.
Basically I have to copy the data table by table: create each table in the other db, then import the data. This is how I would do it if I had to do it manually.
Does anybody know how to do this in a programmatic way?
Thanks!
If you have the CREATEDB privilege, you can try something like the following:
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER your_user_name;
It will copy all your tables (with their structure) and data to your new database from your old one.
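Note that CREATE DATABASE ... TEMPLATE only works within the same cluster, and requires that no one else is connected to originaldb while it runs. If the copy has to cross servers (as is common on RDS), a table-by-table copy can be scripted with psql's \copy, which needs no superuser rights. A minimal sketch, assuming mytable already exists in the target database (all names are placeholders):

# Stream one table's rows from the source database into the target
psql -d originaldb -c "\copy mytable TO STDOUT" | psql -d newdb -c "\copy mytable FROM STDIN"

Loop this over the table names (e.g. from a query on information_schema.tables) to cover every table.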

Group many postgresql databases into separate schemas into same database

We have many postgresql databases with the same structure, using only the public schema in each one.
How can I group all of them in a single database using separate schemas?
You can dump the database definition and data out, edit the output to set the default schema to whatever you choose, and run the scripts back into the database.
Remember to make the dump in plain SQL format (pg_dump's default); the custom format (-Fc) can't be edited this way. The schema change only needs a change to a line like
SET search_path TO whateverschema;
If you don't want to edit the dumps (maybe they're very large), you can of course also restore them one by one into the public schema, alter the tables into the desired schema, and then repeat for the next one.
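For that second approach, the per-table ALTER statements can be generated in one pass. A sketch, assuming the data was just restored into public and customer_x (a placeholder) is the target schema:

DO $$
DECLARE
    t text;
BEGIN
    -- Move every table currently in public into the target schema
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = 'public' LOOP
        EXECUTE format('ALTER TABLE public.%I SET SCHEMA customer_x', t);
    END LOOP;
END $$;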
There is no special way to convert an existing database into a schema in another database unfortunately.
I forgot to post the answer; after all, klin's comment was the answer. These steps were the solution.
Inside customer_x database:
alter schema public rename to customer_x;
Then take a pg_dump of customer_x:
pg_dump "customer_x" --schema "customer_x" -f customer_x.sql
Inside new conglomerated database:
DROP SCHEMA customer_x CASCADE;
CREATE SCHEMA customer_x;
Then load the dump of customer_x:
psql "conglomerated_database" -f customer_x.sql

restoring database from pg_dump file creates strange tables

I have backup created like this:
pg_dump dbname > file
I am trying to restore the database (after drop database and create database) like this:
psql dbname < file
What I get is a database full of tables that are created with dbname.tablename instead of just tablename.
How do I restore a postgres database, making sure the tables it creates have just tablename and not dbname.tablename?
Thanks to @Craig Ringer for pointing me in the right direction.
Yes, there was a SET search_path on the original database. This created the table names with schema names prefixed.
Removing or commenting those lines out of the backup script created tables without a schema prefix, which was desirable. But that didn't result in a complete restore; many tables got left out.
So I did the restore by the usual means, and the tables were created with schema names prefixed. The SQL query scripts broke because they were not specifying the schema names every time they queried a table. To fix this, I followed https://stackoverflow.com/a/2875705/1945517
ALTER ROLE <your_login_role> SET search_path TO dbname;
This fixed the broken queries.
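The same setting can also be attached to the database instead of the role, which covers every role that connects to it. A sketch, where dbname doubles as the schema name the dump used:

ALTER DATABASE dbname SET search_path TO dbname, public;
-- Verify from a fresh session:
SHOW search_path;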

Restore particular tables from one schema to another schema - postgresql

In my case, I have a backup file of the full database. Now I want to restore some particular tables into the public schema.
Those tables are already stored in another schema. Is there any feasible solution, and how do I do it?
It's not clear whether the tables you want to restore were backed up from the schema "public", or whether they were backed up from a different schema.
If your backup is in archive format, not in a plain text format, you can restore individual tables (see the -n and -t options to pg_restore). As far as I know, you can't restore them to a different schema. Instead, you'd restore them to their original schema, then move each table with ALTER TABLE table_name SET SCHEMA new_schema;.
Since you already have tables of the same name in the target schema, I expect you'll have to rename them before restoring from backup. After you restore from backup, and you move the restored tables to the schema "public", you can rename those tables to their original names. PostgreSQL understands that public.table_name and new_schema.table_name are different tables.
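Putting that together, a sketch assuming an archive-format backup db.dump, a source schema old_schema, and a table mytable (all placeholders):

# Restore the single table into its original schema
pg_restore -d dbname -n old_schema -t mytable db.dump

Then, in psql:

-- Get the existing table out of the way, then move the restored one
ALTER TABLE public.mytable RENAME TO mytable_backup;
ALTER TABLE old_schema.mytable SET SCHEMA public;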

How to delete database tables in postgres

I have created some databases with postgres and put some data in them.
The problem is that when I delete/drop a database and then create a new database, the new database always contains the tables and data from the first database that was created with postgres.
How can I delete a database so that, when a new database is created, it doesn't contain data from the old database?
Thanks
It sounds like you created tables in the template1 database (or you specified the TEMPLATE xyz option with your CREATE DATABASE statement).
To get rid of the tables in the template1 database, connect to it and drop all the tables there. After that, new databases will not contain those tables any more.
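A sketch of that cleanup, with mytable standing in for whatever stray tables you find:

psql template1 -c "\dt"                  # list the tables that leaked into the template
psql template1 -c "DROP TABLE mytable;"  # drop each stray table (mytable is a placeholder)

Until template1 is clean, new databases can also be created from the pristine template0 instead:

CREATE DATABASE newdb TEMPLATE template0;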
Delete the rows in the table using the TRUNCATE command:
TRUNCATE <tablename>;
then delete the database using the DROP command:
DROP DATABASE <databasename>;