I have created a Firebird multi-file database
Main database file: D:\Database\MainDB.fdb
Secondary files (240 files): D:\Database\DBFiles\Data001.fdb through D:\Database\DBFiles\Data240.fdb
When I copy the database to another location and try to open it, Firebird cannot locate the secondary files unless they are on the D:\ partition.
I want Firebird to locate the secondary files under the Database\DBFiles folder at the new path.
So if I copy the database to C:\Database\MainDB.fdb, Firebird would open Data001.fdb at the new path, C:\Database\DBFiles, instead of the old path, D:\Database\DBFiles, where the files were initially created.
Can that be done with Firebird? If not, how should it be done?
Update:
Finally I found out it's not possible to change the secondary files of a Firebird database using Firebird itself.
But I found this Firebird FAQ, which mentions the GLINK tool. It doesn't support Firebird 3.x, so I didn't test it, and using it is not recommended even with the Firebird versions it does support.
Done what exactly?
UPD. I edited the very vague original question to make clear WHAT the topic starter wants.
You cannot reliably "copy files with Firebird" - Firebird is not a file-copying tool. You can, to a degree, use EXTERNAL TABLE for raw file access, but that is very limited and does not work on the database itself.
It is a dangerous practice to "copy databases" while Firebird is running, because you would only copy part of the data. Recently updated data that is in the memory cache but has not yet made it to disk would be lost, leaving the database file inconsistent - some data updated and some not. Before you "copy database files", you first have to shut down either those databases or the whole Firebird server.
Firebird has its own tools for moving databases around - they are called backup/restore tools. Maybe what you need is the nbackup tool, if gbak is too slow for you.
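For relocating a multi-file database specifically, gbak's restore lets you name the secondary files wherever you want them. A rough sketch (credentials, page counts, and the number of files are placeholders - you would list all 240):

gbak -b -user SYSDBA -password masterkey D:\Database\MainDB.fdb D:\MainDB.fbk
gbak -c -user SYSDBA -password masterkey D:\MainDB.fbk C:\Database\MainDB.fdb 100000 C:\Database\DBFiles\Data001.fdb 100000 C:\Database\DBFiles\Data002.fdb

On restore, the number after each file name is that file's size in database pages; the last file grows as needed.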
Finally, you can list the files that comprise the database. You can do it via the gstat utility or via the "Services API" it uses; you can also select from the RDB$FILES system table. But what would you do after that? The very act of attaching to the database makes it badly suited for subsequent copying (the consistency problem described above). You would perhaps need to shut the database down, turn it to read-only AND single-user state, and only then attach to it and read RDB$FILES. And after the copying is done, you would have to bring the database back online. All in all, much more complex than nbackup.
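A sketch of that sequence (credentials are placeholders; exact shutdown flags vary a little between versions):

gfix -shut single -force 0 -user SYSDBA -password masterkey D:\Database\MainDB.fdb
gfix -mode read_only -user SYSDBA -password masterkey D:\Database\MainDB.fdb

-- then attach with isql and list the secondary files:
SELECT RDB$FILE_SEQUENCE, RDB$FILE_START, RDB$FILE_NAME
FROM RDB$FILES ORDER BY RDB$FILE_SEQUENCE;

-- after copying, undo it:
gfix -mode read_write -user SYSDBA -password masterkey D:\Database\MainDB.fdb
gfix -online -user SYSDBA -password masterkey D:\Database\MainDB.fdb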
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gstat-example-header.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-files.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gbak.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/nbackup.html
Related
This is on several MacBook Pros and an iMac, all running Monterey (macOS 12.3).
I would like to store a read-only Postgres database on my Dropbox, so that I can access it from multiple computers. I realize storing a writable database there is unworkable, but I'm not trying to do that.
I created a database under /Users/me/Dropbox/dbs on my MacBook Pro. But the other machines don't see the contents of the directory.
I figured it was because postgres has a different UID on the various machines, so I tried doing chown -R me:staff on the database directory on the MacBook Pro. That made the directories and their contents visible in the Dropbox on the other machines, but now I can't start Postgres on the MacBook Pro (I haven't tried elsewhere yet) because of file permissions, even after making the file it complained about go=rx. I'm thinking that since the file is in group staff and not in group postgres, that's not going to work.
Any ideas? For example: can I create the database as me:staff rather than postgres:postgres?
Like Laurenz says, this is never going to work properly. The closest you could get, I suspect, is some foreign data wrapper over the read-only shared directory. That's not going to provide the experience you are looking for, though.
The SQLite wrapper might be your best bet. Store the shared data in SQLite and access it through PG. https://github.com/pgspider/sqlite_fdw
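A minimal sketch of that setup, assuming sqlite_fdw is already installed and that the SQLite file lives at the (made-up) Dropbox path below:

CREATE EXTENSION sqlite_fdw;
CREATE SERVER dropbox_sqlite FOREIGN DATA WRAPPER sqlite_fdw
    OPTIONS (database '/Users/me/Dropbox/dbs/shared.db');
CREATE FOREIGN TABLE readings (id integer, label text)
    SERVER dropbox_sqlite OPTIONS (table 'readings');

Queries against readings then run in Postgres while the actual storage stays in the shared SQLite file.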
PostgreSQL is not Microsoft Access, it is a relational database with a client-server architecture. It works by starting the database server on the data directory and then connecting to the server process with a client. You can only run a single instance of the database server on a single data directory, and that is all you need, since many clients can connect to the server at the same time. The database directory has to be writable by the server and indeed is modified, even if you only perform read operations in the database.
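To make the client-server flow concrete, a minimal sketch (data directory and database name are placeholders):

pg_ctl -D /usr/local/var/postgres start
psql -h localhost -p 5432 -d mydb

The server owns and writes the data directory; clients like psql only ever talk to the server over a connection, never to the files directly.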
Don't shoot me, I'm only the OP!
When we need to back up our DB, we are always able to shut down PostgreSQL completely. After it is down, I found I could copy the "base" directory with the binary data in it to another location, checksum it for accuracy, and later restore it if/when necessary. This has even worked when upgrading to a later version of PostgreSQL. Integrity of the various 'conf' files is not an issue, as that is handled elsewhere (i.e. by other processes/procedures) in the system.
Is there any risk to this approach that I am missing?
The "File System Level Backup" link in Abelisto's comment is what JoeG is talking about doing. https://www.postgresql.org/docs/current/static/backup-file.html
To be safe I would go up one more level, to "main" on our Ubuntu systems, to take the snapshot, and I would thoroughly go through the caveats of doing file-level backups. I was tempted to post the caveats here, but I'd end up quoting the entire page.
The thing to be most aware of (in a 'simple' Postgres environment) is the relationship between the postgres database, a user database, and the pg_clog and pg_xlog directories. If you only copy "base", you lose the transaction and WAL information - and, in more complex installations, other necessary pieces.
If those caveat conditions listed do not exist in your environment, and you can do a full shutdown, this is a valid backup strategy, which can be much faster than a pg_dump.
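As a sketch of what JoeG describes, on an Ubuntu-style layout (paths and version are placeholders):

sudo systemctl stop postgresql
sudo tar czf /backups/pg_main_$(date +%F).tar.gz /var/lib/postgresql/14/main
sudo systemctl start postgresql

The stop/start bracket is what makes this safe; a tar of a running cluster is exactly the hot copy the caveats warn about.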
We're serving a Django/Postgres site running on a VM hypervisor. We're now trying to figure out our backup strategy and have two probable options:
Back up the DB directly using pg_dump
Back up the VM directly by copying the VM image
I lean toward the latter because, I think, I could simply back up everything that has to do with the site. I'm not sure whether I have to shut down the VM for this, though.
What is a better and more recommended way of backing up a DB? Are there any reasons for not using the VM backup?
Thanks
The question basically boils down to: can you consider a hot copy of PostgreSQL's data files a backup?
The answer is: not really. PostgreSQL tries very hard through the use of WAL to ensure that its files are in a consistent state all the time and that it can survive a power failure, but starting it up from a copy of these files puts PostgreSQL into recovery mode. If the backup happened at the wrong second and PostgreSQL can't recover from the state of these files, your backup is useless. You don't want your backup/restore mechanism to depend on the recovery mechanism (unless you're dealing with "crash only" software, which PostgreSQL is not).
The probability of PostgreSQL not being able to recover from these files is not high, but it's not zero either. The probability of PostgreSQL not being able to load an SQL dump that it made, on the other hand, is zero. I prefer backup choices with lower probabilities of failure. pg_dump was designed for doing backups.
PostgreSQL recommends using pg_dump for backups, as a file system (or VM) backup requires the database to be shut down (and has other drawbacks):
http://www.postgresql.org/docs/8.1/static/backup-file.html
Edit: Also, a pg_dump backup will be significantly smaller than a filesystem dump of the same database.
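For reference, a minimal pg_dump round trip (names are placeholders):

pg_dump -U postgres -Fc mydb > mydb.dump
createdb -U postgres mydb_copy
pg_restore -U postgres -d mydb_copy mydb.dump

The custom format (-Fc) is compressed, which is where much of the size advantage comes from.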
There is an additional option. With PostgreSQL you can make an online backup that allows you to snapshot the file system and maintain consistency. You can see details here:
http://www.postgresql.org/docs/9.0/static/continuous-archiving.html
We use this exact method for making backups when we run PostgreSQL in a VM.
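The gist of that approach, as a sketch (the archive path is a placeholder; on recent PostgreSQL versions the functions are named pg_backup_start/pg_backup_stop instead): enable WAL archiving in postgresql.conf, then bracket the filesystem snapshot with the backup markers:

# postgresql.conf
wal_level = archive            # 'replica' on 9.6 and later
archive_mode = on
archive_command = 'cp %p /backup/wal/%f'

-- in psql, around the snapshot:
SELECT pg_start_backup('nightly');
-- take the filesystem / VM snapshot here
SELECT pg_stop_backup();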
I need to automate the creation of a duplicate DB from the .bak of my production DB. I've done the operation plenty of times via the GUI, but when executing from the command line I'm a little confused by the various switches - in particular, the file names and being sure ownership is correctly replicated.
Just looking for the TSQL syntax for RESTORE that accomplishes that.
Assuming you're using SQL Server 2005 or 2008, the simplest way is to use the "Script" button at the top of the restore database dialog in SQL Server Management Studio. This will automatically create a T-SQL script with all the options/settings configured in the way you've filled in the dialog.
Look here: How to: Restore a Database to a New Location and Name (Transact-SQL), which has a good example:
This example creates a new database named MyAdvWorks. MyAdvWorks is a copy of the existing AdventureWorks database, which includes two files: AdventureWorks_Data and AdventureWorks_Log. This database uses the simple recovery model. The AdventureWorks database already exists on the server instance, so the files in the backup must be restored to a new location. The RESTORE FILELISTONLY statement is used to determine the number and names of the files in the database being restored. The database backup is the first backup set on the backup device.
USE master
GO
-- First determine the number and names of the files in the backup.
-- AdventureWorks_Backup is the name of the backup device.
RESTORE FILELISTONLY
FROM AdventureWorks_Backup
-- Restore the files for MyAdvWorks.
RESTORE DATABASE MyAdvWorks
FROM AdventureWorks_Backup
WITH RECOVERY,
MOVE 'AdventureWorks_Data' TO 'D:\MyData\MyAdvWorks_Data.mdf',
MOVE 'AdventureWorks_Log' TO 'F:\MyLog\MyAdvWorks_Log.ldf'
GO
This may help also: Copying Databases with Backup and Restore
Currently, I have an application that uses Firebird in embedded mode to connect to a relatively simple database stored as a file on my hard drive. I want to switch to using PostgreSQL to do the same thing (Yes, I know it's overkill). I know that PostgreSQL cannot operate in embedded mode and that is fine - I can leave the server process running and that's OK with me.
I'm trying to figure out a connection string that will achieve this, but have been unsuccessful. I've tried variations on the following:
jdbc:postgresql:C:\myDB.fdb
jdbc:postgresql://C:\myDB.fdb
jdbc:postgresql://localhost:[port]/C:\myDB.fdb
but nothing seems to work. PostgreSQL's directions don't include an example for this case. Is this even possible?
You can trick it. If you are running PostgreSQL on a UNIX-like system, you should be able to create a RAM disk and use that for the database storage. Here's a pretty good step-by-step guide for RAM disks on Linux.
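One way to wire that up on Linux, as a sketch (mount point, size, and names are placeholders): mount a tmpfs and point a tablespace at it:

sudo mkdir /mnt/pgram
sudo mount -t tmpfs -o size=512M tmpfs /mnt/pgram
sudo chown postgres:postgres /mnt/pgram

-- then in psql:
CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';
CREATE DATABASE scratch TABLESPACE ramspace;

Everything in the tmpfs vanishes on unmount or reboot, so this only suits disposable data.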
In general, though, I would suggest using SQLite for an "SQL db in RAM" type of application.
Postgres databases are not a single file. There will be one file for each table and each index in the data directory, inside a subdirectory for the database. The files are named with the object ID (OID) of the db / table / index.
The JDBC urls point to the database name, not any specific file:
jdbc:postgresql:foodb (localhost is implied)
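Spelled out with an explicit host and port (5432 is the default), that same URL is:

jdbc:postgresql://localhost:5432/foodb

Note there is no file path anywhere - only the database name as the server knows it.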
If by "disk that behaves like memory" you mean that the db only exists for the lifetime of your program, there's no reason why you can't create a db at program start and drop it at program exit. Note that this is just DDL to create the database, not creating the data directory via initdb. You could connect to the default 'postgres' database, create your db, then connect to it.
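A sketch of that lifecycle (the database name is made up), run while connected to the default postgres database:

CREATE DATABASE scratch;
-- reconnect to 'scratch' and do the work there ...
DROP DATABASE scratch;
-- DROP fails if any session is still connected to 'scratch', so disconnect first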
Firebird 2.1 onwards supports global temporary tables, whose rows only exist for the duration of the database connection (or transaction, depending on the ON COMMIT clause).
Syntax goes something like CREATE GLOBAL TEMPORARY TABLE ... ON COMMIT PRESERVE ROWS
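A filled-in version of that statement (table and column names are made up):

CREATE GLOBAL TEMPORARY TABLE session_scratch (
    id   INTEGER NOT NULL,
    note VARCHAR(100)
) ON COMMIT PRESERVE ROWS;

With PRESERVE ROWS the data survives commits and disappears when the connection ends; ON COMMIT DELETE ROWS scopes it to the transaction instead.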