How to move a Postgres schema via file operations?

I have a schema schema1 in a Postgres database A. I want to have a duplicate of this schema (model + data) in database B under the name schema2.
What are my options?
Currently I:
* dump schema1 from database A
* sed my way through the schema renaming in the dump: schema1 becomes schema2
* restore schema2 into database B
but I am looking for a more efficient procedure, for instance via direct file operations on the Postgres binary files.
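Concretely, the commands look roughly like this (connection details omitted; the sed pattern is only a sketch, since a blind rename can also touch data values that happen to contain the string schema1):

    pg_dump --schema=schema1 --file=schema1.sql databaseA
    # rename the schema everywhere in the plain-text dump (deliberately broad)
    sed 's/\bschema1\b/schema2/g' schema1.sql > schema2.sql
    psql --dbname=databaseB --file=schema2.sql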
Thanks for your help
Jerome Wagner

First, be aware (as others have commented) that PostgreSQL and MySQL have different ideas of what a SCHEMA is. In PostgreSQL (and in the SQL standard) a schema is just a namespace inside a database, which you can use to qualify object names (analogous to directories and files; and there is a 'public' schema which is used as the default for unqualified names).
Schemas, then, are about the organization of names, not isolation: as long as we are inside one database, objects (tables, views...) from different schemas are mutually visible, so that, for example, a view can mix tables from different schemas, or an FK can refer to another schema. On the contrary, objects in different databases are isolated (they only share users and groups); you can't join tables from different databases.
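For example, a quick sketch (schema and table names are made up for illustration):

    CREATE SCHEMA app;
    CREATE SCHEMA audit;
    CREATE TABLE app.users (id int PRIMARY KEY);
    -- an FK in one schema can point at a table in another
    CREATE TABLE audit.log (user_id int REFERENCES app.users (id));
    -- and a view can mix tables from both schemas
    CREATE VIEW public.active_users AS
        SELECT u.id FROM app.users u JOIN audit.log l ON l.user_id = u.id;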
A dump-restore is the only sane way I can think of to copy a schema from one database to another. Even so, given the above, it might not be safe or even possible if the schema depends on other schemas in the database (it's as if you were copying the classes of a Java package from one project to another). I would not dream of attempting a copy of the binary files.
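If you stay with dump-restore, one variation that avoids the sed step (assuming database B does not already contain a schema named schema1) is to restore under the original name and rename afterwards; note that ALTER SCHEMA ... RENAME moves every contained object, but it will not rewrite any references to schema1 hard-coded inside function bodies:

    pg_dump --schema=schema1 --format=custom --file=schema1.dump databaseA
    pg_restore --dbname=databaseB schema1.dump
    psql --dbname=databaseB -c 'ALTER SCHEMA schema1 RENAME TO schema2;'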

Related

Importing existing table data to a new table in different database (postgres)

I would like to import all the data of an existing table in one database into a new table in a different database in Postgres. Any suggestions would be helpful.
The easiest way would be to pg_dump the table and pg_restore it in the target database.
In case that is not an option, you should definitely take a look at postgres_fdw (Foreign Data Wrapper), which allows you to access data from different databases - even from different machines. It is slightly more complex than the traditional export/import approach, but it creates a direct connection to the foreign table.
Take a look at this example.
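For illustration, a minimal postgres_fdw setup might look like this (server name, credentials and table names are placeholders):

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER src_server FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'source-host', dbname 'source_db', port '5432');
    CREATE USER MAPPING FOR CURRENT_USER SERVER src_server
        OPTIONS (user 'remote_user', password 'secret');
    -- make the remote table visible locally, then copy its rows
    IMPORT FOREIGN SCHEMA public LIMIT TO (old_table)
        FROM SERVER src_server INTO public;
    CREATE TABLE new_table AS SELECT * FROM old_table;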

Postgresql equivalent to H2 DROP ALL OBJECTS

In the H2 database there is a statement:
DROP ALL OBJECTS;
which drops all existing views, tables, sequences, schemas, function aliases, roles, user-defined aggregate functions, domains, and users (except the current user). If DELETE FILES is specified, the database files will be removed when the last user disconnects from the database. (See here)
How can I do exactly the same thing in PostgreSQL?
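There is no single equivalent statement in PostgreSQL. If everything you care about lives in the public schema, the usual approximation (destructive, and assuming you have sufficient privileges) is to drop and recreate that schema:

    DROP SCHEMA public CASCADE;
    CREATE SCHEMA public;
    -- restore the privileges the public schema normally grants
    GRANT ALL ON SCHEMA public TO public;

Note that, unlike H2's command, this does not touch roles or users; those would have to be dropped separately.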

Why use explicit schema prefix in Postgres functions?

I am using Postgres for microservice backends and the databases are designed to be small(ish) and simple.
We have four schemas in our databases:
live: all the functions, tables, etc. used by the application
utest: unit tests
testframe: unit-testing functions/framework
testdata: functions that create common test data
When the database is shipped to production, ONLY the 'live' schema is retained; all the testing schemas are dropped.
So my question is: is there any reason for functions in the 'live' schema to explicitly use the 'live.' schema prefix when referring to tables and calling other functions?
After much googling I am having a hard time making an argument for explicitly using the schema prefix.
Thanks, any comments are appreciated.
Always qualifying objects with their schema names is a good way of making sure that no other objects with the same name in other schemas can be picked up by mistake. For example, the pg_catalog schema is effectively always on your search_path, so a system object might be chosen instead of yours.
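An alternative (or complement) to qualifying every reference is to pin the search path on each function; a sketch, with made-up object names:

    CREATE FUNCTION live.get_order_total(p_order_id int)
    RETURNS numeric
    LANGUAGE sql
    -- unqualified names now resolve only against live (pg_temp is listed
    -- last so temporary objects cannot shadow live ones)
    SET search_path = live, pg_temp
    AS $$
        SELECT sum(amount) FROM order_lines WHERE order_id = p_order_id;
    $$;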

Create a Folder in Redshift

I am looking for a way to create folders inside the temp_08 folder. temp_08 is the only folder I have write access to, so I need to create the folders INSIDE temp_08. I want to store tables inside these folders so as to organize my tables more cleanly. What is the best way to do this in Redshift?
Amazon Redshift is based on a fork of PostgreSQL. Therefore, it inherits many of the attributes of PostgreSQL.
To arrange your tables into more logical groups, you can use:
CREATE DATABASE
CREATE SCHEMA
From the Schemas documentation:
A database contains one or more named schemas. Each schema in a database contains tables and other kinds of named objects. By default, a database has a single schema, which is named PUBLIC. You can use schemas to group database objects under a common name. Schemas are similar to operating system directories, except that schemas cannot be nested.
Basically, a Database is a separate logical grouping. You connect to a specific database when connecting with Amazon Redshift. Schemas exist within a Database and a search path can determine which one to use (for example, a personal schema first, then a default schema).
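So, assuming you can be granted the CREATE privilege on the database, the 'folder' you are after is a schema (names below are examples):

    CREATE SCHEMA sales_reports;
    CREATE TABLE sales_reports.monthly_totals (month DATE, total DECIMAL(18,2));
    -- resolve unqualified table names against the new schema first
    SET search_path TO sales_reports, public;

Since schemas cannot be nested, you cannot literally create a folder inside temp_08; a naming convention such as temp_08_reports is the closest equivalent.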

Backup and Restoration of a Running Schema in Another Database That Already Has Other Schemas

I have a running database with only one DBA user (i.e. other than sys and system), "abc". Under this Oracle user I have tables, views, sequences, procedures, functions, etc. Now I have to copy both the data and the schema to another database on another machine that already has a dozen schemas running (one under each separate DBA user). I have the following concerns:
(1) I have to rename the schema on the old machine from "abc" to "pqr" before moving to the new machine.
(2) Inside my procedures and functions I am using AUTHID CURRENT_USER, and therefore have to use the "abc." qualifier before the names of tables, views, sequences, procedures and functions. When changing the schema name, is there some automatic way to change the qualifiers too?
(3) To copy the data, I know only one way, which is to take a backup of the database for only the one user "abc" (i.e. not a backup of sys or system) and then restore it into the new database. Can this in any way destroy the other schemas or their data?
(4) In my schema I am creating Oracle users with limited rights using a procedure. The new usernames are stored in a Users table. I am also creating database roles and associating users with roles; the role names are stored in a Roles table. When migrating to the new machine I have to make sure to prefix my users and roles with something unique so that I do not disturb the Oracle users created by the other schemas.
(5) I know that in the new database there has to be a new DBA user called "pqr". Do I also have to have the sysdba privilege? I am not responsible for the whole database on the new machine, only for my schema. Being a sysdba, can I in any way hurt the other DBAs (like dropping them, or changing their schemas)? If I do not have the sysdba privilege, what limitations do I get? I am using Oracle Text, so I have to use some built-in packages. I also have to create a physical directory on the Windows file system, and I have to create, alter (change password) and drop roles and users via stored procedures when connected to the database as "pqr".
Both the old and the new database run on separate dedicated machines: Windows Server 2003 with Oracle 10gR1.
The simplest option would be to use the Oracle export utility (classic or DataPump) to take a logical backup of the abc schema in the first database and to import the backup using the Oracle import utility into the new database. If you're using the classic version, you'd use the FROMUSER and TOUSER parameters to specify that you want to import the data into a different schema. If you're using the DataPump version, you'd use the REMAP_SCHEMA parameter. The DataPump version will be more efficient if you have a relatively large amount of data.
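With the Data Pump utilities, for example, the pair of commands might look roughly like this (credentials, directory object and file names are placeholders):

    expdp system@olddb schemas=ABC directory=DATA_PUMP_DIR dumpfile=abc.dmp logfile=abc_exp.log
    impdp system@newdb directory=DATA_PUMP_DIR dumpfile=abc.dmp logfile=abc_imp.log remap_schema=ABC:PQR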
Unfortunately, though, there is no automatic way to change explicit schema qualifiers. You'll need to edit the code after you import it, or pull the code from your source control system, edit it, and deploy it to the new database.