Postgres equivalent to Oracle's "DIRECTORY" objects

Is it possible to create a "DIRECTORY" object in Postgres?
If not, can someone help me with a solution for how to implement it in PostgreSQL?

Not the best option, but you could use:
COPY (SELECT 1) TO PROGRAM 'mkdir --mode=777 -p /path/to/your/directory/';
Note that only the last directory in the path gets the permissions set in --mode, and that COPY ... TO PROGRAM requires superuser privileges (or, on newer versions, membership in the pg_execute_server_program role).

There is no equivalent concept to an "Oracle directory" in Postgres.
The alternatives depend on why the "Oracle directory" is needed.
If the directory is needed to read and write files on the database server, then this can be done through Generic File Access Functions. Access to those functions is restricted to superusers (details in the linked section of the manual). If regular users should be able to use them, the best thing would be to create wrapper functions and then grant execute on those functions to the users in question.
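For example, a minimal sketch of such a wrapper, with purely illustrative function, file, and role names (a real wrapper should also validate the file name to prevent path traversal):

CREATE FUNCTION read_export_file(fname text) RETURNS text
LANGUAGE sql SECURITY DEFINER AS $$
  -- pg_read_file() resolves relative paths against the data directory;
  -- SECURITY DEFINER runs the function with the (superuser) owner's rights
  SELECT pg_read_file('export/' || fname);
$$;
REVOKE ALL ON FUNCTION read_export_file(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION read_export_file(text) TO reporting_user;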
For security reasons, only directories inside the database cluster can be accessed.
But it's possible to create symlinks inside the data directory that point to directories outside it. Access privileges on those directories need to be set up properly for the postgres operating system user (the one under which the postgres process is started).
If the directory is needed to access e.g. CSV files through Oracle's external tables, then there is no need for a "directory": the file_fdw foreign data wrapper can access files outside the data directory (provided access privileges have been set up correctly at the file-system level).
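A minimal sketch, assuming file_fdw is installed and using a made-up table definition and file path:

CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE ext_sales (
  sale_id integer,
  amount  numeric
) SERVER csv_files
OPTIONS (filename '/srv/exports/sales.csv', format 'csv', header 'true');
-- the CSV can now be queried like a regular table
SELECT * FROM ext_sales;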

The question doesn't really make sense as asked: PostgreSQL is a database management system, and it doesn't expose files and directories as database objects.
The closest parallel I can think of is schemas - see CREATE SCHEMA.
Now, if you want to use COPY to write output to the server's disk and want to create a directory to put that output in... then no, there's nothing built in for that. But you can use an untrusted procedural language like PL/PerlU or PL/PythonU to do it easily enough, as sketched below.
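A rough sketch with PL/Python (the untrusted plpython3u variant; installing it requires superuser, and the function name and path are illustrative):

CREATE EXTENSION IF NOT EXISTS plpython3u;
CREATE FUNCTION make_server_dir(path text) RETURNS void
LANGUAGE plpython3u AS $$
import os
os.makedirs(path, exist_ok=True)  # like mkdir -p: creates intermediate directories as needed
$$;
SELECT make_server_dir('/tmp/copy_output');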

Related

Postgresql - restore SQL dump with tablespaces

I'm planning to move some tables to different tablespaces (folders) on my PROD Linux box.
Overnight DB backups are done using pg_dumpall
I also have a DEV environment running under Windows, where I usually restore the SQL dump (made on Linux).
I'm now worried about how to restore such SQL dumps when they contain pointers to a Linux partition, in Linux notation.
I have read on various web pages that the same folder structure has to be created in order to restore non-standard tablespaces. But folder paths in Windows and Linux look totally different (C:\... vs /opt/...).
Is there any command-line switch that allows remapping a tablespace to a different (Windows-style) location during restore? If not, how do you manage that scenario?
I guess I should be able to achieve that by editing the SQL dump file - but it's huge, a few hundred gigabytes, and it is also a bit problematic to automate.
You can retrieve just the tablespace definitions with a separate pg_dumpall command. You still need to do some editing, but the output is not that large. (A similar approach works for users.)
pg_dumpall --tablespaces-only >stuff.out
There is no option to remap tablespace names during import, so you will need to create them in your Windows installation with the same names - the actual physical location ("folder structure") is irrelevant, as the SQL dump only references tablespaces by name.
If the script contains a CREATE TABLESPACE command, you need to change that command to use a directory path that exists on your system before you can run the SQL script, as in the example below. But that is the only change needed; all other places refer to the tablespace by name, not by folder path.
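For example, the edit in stuff.out might look like this (paths invented for illustration; note that Postgres on Windows accepts forward slashes in tablespace locations):

-- as dumped on the Linux box:
CREATE TABLESPACE ts_data OWNER postgres LOCATION '/opt/pgdata/ts_data';
-- edited for the Windows DEV machine:
CREATE TABLESPACE ts_data OWNER postgres LOCATION 'C:/pgdata/ts_data';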
Typically pg_dump is easier than pg_dumpall for moving databases around (e.g. because of tablespaces).

Where is the DATA_PUMP_DIR in SQL Developer

I'm trying to import a .dmp file using the Data Pump Import tool in Oracle SQL Developer.
I'm connected to an oracle database running in a container on my local machine.
When I get to the step where I specify where the dump file is to import, where should I place the .dmp file?
DATA_PUMP_DIR is a default Oracle directory object. It isn't part of SQL Developer; the import tool is really just giving you a GUI equivalent of running impdp from the command line.
You can find the operating system location that Oracle directory object points to by querying the data dictionary:
select directory_path from all_directories where directory_name = 'DATA_PUMP_DIR';
The path that query returns is on the database server (in your case that will be inside your container too), and your dump file needs to go there.
You might want to create additional directory objects pointing to other locations, and grant suitable privileges to users to be able to access them; but they all need to be on the DB server and read/writable by the Oracle process owner on that server.
(They could be remote filesystems mounted on the server, they don't necessarily have to be local storage, but that's another issue and more operating-system specific. Again, in your case, you might be able to share a folder on your local machine with the container if you don't want to copy the file into the container.)
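For reference, a sketch of creating such an additional directory object (the object name, path, and grantee are illustrative):

CREATE DIRECTORY my_dump_dir AS '/u01/app/oracle/dumps';
GRANT READ, WRITE ON DIRECTORY my_dump_dir TO scott;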

Change path of Firebird Secondary database files

I have created a Firebird multi-file database:
Main database file: D:\Database\MainDB.fdb
Secondary files (240 files): D:\Database\DBFiles\Data001.fdb to D:\Database\DBFiles\Data240.fdb
When I copy the database to another location and try to open it, Firebird doesn't locate the secondary files if they are not on the D:\ partition.
I want Firebird to locate the secondary files under the Database\DBFiles folder at the new path.
So if I copy the database to C:\Database\MainDB.fdb, Firebird would open Data001.fdb at the new path, C:\Database\DBFiles, instead of the old path, D:\Database\DBFiles, where the files were initially created.
Can that be done with Firebird? If not, how should it be done?
Update:
Finally I found out that it's not possible to change the paths of a Firebird database's secondary files using Firebird itself.
However, this Firebird FAQ mentions the GLINK tool. It doesn't support Firebird 3.x, so I didn't test it, and using it is not recommended even with supported versions of Firebird.
Done what exactly?
UPD. I edited the very vague original question to make clear WHAT the topic starter wants.
You cannot reliably "copy files with Firebird" - Firebird is not a file-copying tool. You can, to a degree, use EXTERNAL TABLE for raw file access, but that is very limited and cannot be applied to the database itself.
It is a dangerous practice to "copy databases" while Firebird is running, because you would only copy part of the data. Recently updated data that is in the memory cache but has not yet made it to disk would be lost, and the database file would be inconsistent, with some data updated and some not. Before you "copy database files" you first have to shut down either those databases or the whole Firebird server.
Firebird has its own tools for moving databases around - and those are called backup/restore tools. Maybe what you need is the nbackup tool, if gbak is too slow for you.
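For example, a sketch of relocating the secondary files with gbak backup/restore - on restore you can list new file names, each followed by its size in database pages except the last (the paths and page counts here are invented):

gbak -b D:\Database\MainDB.fdb D:\Backup\MainDB.fbk
gbak -c D:\Backup\MainDB.fbk C:\Database\MainDB.fdb 200000 C:\Database\DBFiles\Data001.fdb 200000 C:\Database\DBFiles\Data002.fdb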
Finally, you can list the files that comprise the database. You can do it via the gstat utility or the "Services API" it uses, or you can select from the RDB$FILES system table. However, what would you do after that? The very act of attaching to the database makes it badly suited for subsequent copying, as described above. You would perhaps need to shut down the database, turn it to read-only AND single-user state, and only then attach to it and read RDB$FILES. And after the copying is done, you would have to bring the database back out of shutdown. That is rather more complex than nbackup.
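For reference, the system-table query mentioned above (column names per the Firebird language reference):

SELECT RDB$FILE_NAME, RDB$FILE_SEQUENCE, RDB$FILE_START FROM RDB$FILES;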
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gstat-example-header.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-files.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gbak.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/nbackup.html

Modify data in a database backup file

I have a .bak file of a database that contains PHI, located on a server in a PHI environment. I have a .r script that will anonymize and remove the PHI data in the database, along with any connection strings that point to client systems. Is there a way to run this script (or another type of script) to modify the .bak file in the PHI environment before I move it to a non-PHI-compliant environment to be restored?
I am guessing that a ".bak file" is a backup (there is no standard Progress-wide naming convention, but that is a common usage). If so, then no, there is no way to directly manipulate the backup without restoring it.
You could restore it in a safe environment, modify the contents of the restored db, and then make a new backup of the anonymized database. (If I were to do something like that, I would use a different extension, perhaps .BKX, for the scrubbed backup so that it is obvious to people which one is which - that would help to reduce the chances of accidental release...)

Making PostgreSQL create directory for new tablespace

I would like to be able to do something like
CREATE TABLESPACE bob LOCATION 'C:\a\b\c\d\e\f\bob';
without needing to create the whole directory tree beforehand.
This is because I have Java code that creates tablespaces on the fly, and I would like to be able to run it on a separate machine (so it couldn't mkdir() or anything like that).
Is there any sort of Postgres configuration that would allow me to make Postgres create the appropriate directory tree by itself?
You could try doing the mkdir directly in a PostgreSQL function using PL/sh or any of your favorite untrusted PL/* languages that are available for PostgreSQL, as sketched below.
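A minimal sketch using the plsh extension, assuming it is installed, you are a superuser, and the server runs on a Unix-like OS (the function name and paths are illustrative):

CREATE EXTENSION IF NOT EXISTS plsh;
CREATE FUNCTION make_dir_tree(path text) RETURNS text
LANGUAGE plsh AS $$
#!/bin/sh
# runs as the postgres OS user, so the new directory gets the right owner
mkdir -p "$1" && echo "created $1"
$$;
SELECT make_dir_tree('/var/lib/postgresql/tablespaces/bob');
CREATE TABLESPACE bob LOCATION '/var/lib/postgresql/tablespaces/bob';

Note that CREATE TABLESPACE cannot run inside a transaction block, so issue the two statements separately rather than wrapping them both in one function.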