I have installed PostgreSQL 13.1 on two computers, a laptop and a desktop.
I created a database using a tablespace on a portable SSD drive under F:\MyProject\DB\PG_13_202007201\20350, so I expect all the database files to be in there somewhere.
I want to work on the database from the SSD drive. That works fine on the laptop that created the database, but when I go to my desktop I can't figure out how to attach (SQL Server style) the existing database/tablespace and continue working on it.
Is there a way to work like this with PostgreSQL?
You cannot do that.
A tablespace contains just the data files, but essential information is stored in the metadata (the table name, the columns and their data types, constraints, ...) and elsewhere (for example, the commit log determines which table entries you can see and which ones you cannot).
In short, a tablespace is not self-contained, and you cannot use a tablespace from one database cluster with another database cluster.
If you want to move data between databases, use pg_dump.
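To make the "not self-contained" point concrete, here is a minimal sketch (the tablespace name, its path, and the table are hypothetical): the data file lands inside the tablespace directory, but the table's definition exists only in the cluster's own system catalogs, and transaction status lives in the shared commit log.

-- Hypothetical tablespace and table; the LOCATION path is just an example.
CREATE TABLESPACE portable_ssd LOCATION '/mnt/ssd/pgdata';
CREATE TABLE measurements (id int, reading numeric) TABLESPACE portable_ssd;

-- The data file is created inside the tablespace directory ...
SELECT pg_relation_filepath('measurements');
-- e.g. pg_tblspc/16390/PG_13_202007201/16384/16391

-- ... but the table's definition is catalog metadata stored in the cluster itself.
SELECT relname, relfilenode, reltablespace FROM pg_class WHERE relname = 'measurements';

Copying the tablespace directory to another machine therefore copies only the first half of that picture.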
Related
Is it possible to disconnect and reconnect a Postgres tablespace and all the associated objects within that tablespace?
I have a Postgres database with two tablespaces, one on a high-speed SSD drive (I've named this FASTSPACE), and the other on a slower, traditional magnetic HDD (named SLOWSPACE). The slower tablespace is reserved for large volumes of historic data which is rarely accessed.
Is it possible to temporarily disconnect SLOWSPACE, with the intention of reconnecting it at a later date? According to the DROP TABLESPACE documentation, a tablespace can only be dropped once all database objects within it have been dropped.
I'm aware that I can back up all the tables in SLOWSPACE, then delete them, and then DROP the tablespace; however, this will take time (there are several terabytes of data). If I then need the archived data again I'll have to create a new version of the SLOWSPACE tablespace from scratch, then re-create all the objects from the backups. Again, this will take time.
Is there any way of temporarily disconnecting SLOWSPACE from the database - whilst still leaving the rest of the database up and running?
Update: happy to accept Frank Heikens' two-letter answer - 'no'.
Per the Postgres documentation:
Databases are physically separated and access control is managed at the connection level.
Are there any further details on how Postgres achieves this physical isolation? Are the files used to store data in the backend totally separate?
Yes. Each table is stored as a separate file (actually, multiple files). Different databases are in different directories. Indexes, etc. are also in one or more separate files.
However, there's a lot of shared state. Some system tables are shared between all databases. The write-ahead log (WAL) is also shared, as is the commit log (pg_clog). So you cannot just extract one database's files and attach it to another PostgreSQL instance. They're meaningless without some of the shared files.
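A quick way to see this layout from SQL (the table name below is a placeholder): each database maps to a directory named after its OID under base/, per-table files live inside that directory, and shared catalogs live under global/ instead.

-- Each database corresponds to a directory base/<oid> under the data directory.
SELECT oid, datname FROM pg_database;

-- A table's files live inside its own database's directory, e.g. base/16384/16397.
SELECT pg_relation_filepath('some_table');

-- Shared catalogs, such as pg_database itself, live under global/.
SELECT pg_relation_filepath('pg_database');
-- returns something like global/1262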
I have a mobile/web project using PostgreSQL 9.3 as the database on a Linux server.
The data won't be huge, but it will grow as time goes on.
With the long term in mind, I want to know:
Questions:
1. Is it necessary to create a tablespace for my database, or can I just use the default one?
2. If I create a new tablespace, what is the proper location on Linux for the directory, and why?
3. If I don't create one now and wait until I have to, will it be easy to migrate the database and its data to a new tablespace later?
Just use the default tablespace; do not create new tablespaces. Tablespaces are only useful if you have multiple physical disks, so that you can define which data is stored on which physical disk. The directory where your data is located is not that important for the workings of Postgres, so if you only have one disk there is no point in using tablespaces.
Should your data grow beyond the capacity of one disk, you will have to perform a full data migration anyway to move it to another physical disk, so you can configure tablespaces at that time.
The idea behind defining which data is located on which disk (with tablespaces) is that you can do things like putting a big table that is hardly used on a slow disk, and putting a very intensively used table on a separate, faster disk. But I assume you're not there yet, so don't overcomplicate things.
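If you do reach that point, the mechanics are straightforward. A minimal sketch, assuming a second disk mounted somewhere like /mnt/nvme (the tablespace, table, and index names are hypothetical):

-- Create a tablespace on the second physical disk (the directory must already
-- exist and be owned by the postgres OS user).
CREATE TABLESPACE fast_disk LOCATION '/mnt/nvme/pg_fast';

-- Move a heavily used table onto it (this rewrites the table and takes a lock) ...
ALTER TABLE hot_orders SET TABLESPACE fast_disk;

-- ... or create new objects there directly.
CREATE INDEX hot_orders_created_idx ON hot_orders (created_at) TABLESPACE fast_disk;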
I have a running database with only one DBA user (i.e. other than sys and system), "abc". Under this Oracle user I have tables, views, sequences, procedures, functions, etc. Now I have to copy both the data and the schema to another database on another machine that already has a dozen schemas running (one under each separate DBA). I have the following concerns:
(1) I have to rename the schema on the old machine from "abc" to "pqr" before moving it to the new machine.
(2) Inside my procedures and functions I am using AUTHID CURRENT_USER, and therefore have to use the "abc." qualifier before the names of tables, views, sequences, procedures, and functions. When changing the schema name, is there some automatic way to change the qualifiers too?
(3) To copy the data, I know only one way, which is to take a backup of the database for only the user "abc" (i.e. not of sys and system), then restore that to the new database. Can this in any way destroy the other schemas or their data?
(4) In my schema, I am creating Oracle users with limited rights using a procedure. The new usernames are stored in a Users table. I am also creating database roles and associating users with roles. The role names are stored in a Roles table. When migrating to the new machine I have to make sure to prefix my users and roles with something unique so that I do not disturb Oracle users created by the other schemas.
(5) I know that in the new database there has to be a new DBA user called "pqr". Do I also have to have the sysdba privilege? I am not responsible for the whole database on the new machine, only for my own schema. As a sysdba, could I in any way harm the other DBAs (like dropping them or changing their schemas)? If I do not have the sysdba privilege, what limitations do I face? I am using Oracle Text, so I have to use some built-in packages. I also have to create a physical directory on the file system in Windows, and I have to create, alter (change passwords for), and drop roles and users via stored procedures when connected to the database as "pqr".
Both the old and new databases are running on separate dedicated machines, both Windows Server 2003 with Oracle 10gR1.
The simplest option would be to use the Oracle export utility (classic or DataPump) to take a logical backup of the abc schema in the first database and to import the backup using the Oracle import utility into the new database. If you're using the classic version, you'd use the FROMUSER and TOUSER parameters to specify that you want to import the data into a different schema. If you're using the DataPump version, you'd use the REMAP_SCHEMA parameter. The DataPump version will be more efficient if you have a relatively large amount of data.
Unfortunately, though, there is no way to change explicit schema qualifiers. You'll need to edit the code after you import it or pull the code from your source control system, edit the code, and deploy it to the new database.
I'm interested in getting the physical locations of tables, views, functions, and the data/content in the tables of PostgreSQL on Linux. My scenario is that PostgreSQL could be installed on an SD card or on a hard disk. If I have tables, views, functions, and data on the SD card, I want to get their physical locations and merge/copy them onto my hard disk whenever I wish to change the storage medium. I assume the database is stored as plain files.
Also, is it possible to view the contents of the files? I mean, can I access them?
Kevin and Mike already provided pointers on where to find the data directory. For the physical location of a table in the file system, use:
SELECT pg_relation_filepath('my_table');
Don't mess with the files directly unless you know exactly what you are doing.
A database as a whole is represented by a subdirectory in PGDATA/base.
If you use tablespaces it gets more complicated. Read the details in the chapter Database File Layout in the manual:
For each database in the cluster there is a subdirectory within PGDATA/base, named after the database's OID in pg_database. This subdirectory is the default location for the database's files; in particular, its system catalogs are stored there.
...
Each table and index is stored in a separate file. For ordinary relations, these files are named after the table or index's filenode number, which can be found in pg_class.relfilenode.
...
The pg_relation_filepath() function shows the entire path (relative to PGDATA) of any relation.
See the manual for the function pg_relation_filepath().
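If you need the absolute path on disk rather than the PGDATA-relative one, you can combine the function with the data_directory setting; a small sketch (the table name is a placeholder):

SELECT current_setting('data_directory') || '/' || pg_relation_filepath('my_table') AS absolute_path;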
The query show data_directory; will show you the main data directory. But that doesn't necessarily tell you where things are stored.
PostgreSQL lets you define new tablespaces. A tablespace is a named directory in the filesystem. PostgreSQL lets you store individual tables, indexes, and entire databases in any permissible tablespace. So if a database were created in a specific tablespace, I believe none of its objects would appear in the data directory.
For solid run-time information about where things are stored on disk, you'll probably need to query pg_database, pg_tablespace, or pg_tables from the system catalogs. Tablespace information might also be available in the information_schema views.
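For example, a minimal sketch of such queries, assuming PostgreSQL 9.2 or later for pg_tablespace_location() and using a placeholder table name: list each tablespace with its on-disk location, and check which tablespace a given table uses.

-- Tablespaces and their filesystem locations (an empty location means the
-- built-in pg_default/pg_global areas inside the data directory).
SELECT spcname, pg_tablespace_location(oid) AS location FROM pg_tablespace;

-- Which tablespace does a particular table use? (NULL means the default.)
SELECT schemaname, tablename, tablespace FROM pg_tables WHERE tablename = 'my_table';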
But for merging or copying to your hard disk, using these files is almost certainly a Bad Thing. For that kind of work, pg_dump is your friend.
If you're talking about copying the disk files as a form of backup, you should probably read this, especially the section on Continuous Archiving and Point-in-Time Recovery (PITR):
http://www.postgresql.org/docs/current/interactive/backup.html
If you're thinking about trying to directly access and interpret data in the disk files, bypassing the database management system, that is a very bad idea for a lot of reasons. For one, the storage scheme is very complex. For another, it tends to change in every new major release (issued once per year). Thirdly, the ghost of E.F. Codd will probably haunt you; see rules 8, 9, 11, and 12 of Codd's 12 rules.