Firebird Shadow on a shared folder (Windows OS) - firebird

I'm using the isql command to create a shadow file on a shared Windows folder.
Is this the right command?
SQL> create shadow 5 AUTO '\\bma-pc\shadows\PERSO5.SHD';
Statement failed, SQLSTATE = HY000
unsuccessful metadata update
-A node name is not permitted in a secondary, shadow, cache or log file name
It works fine on a local folder.

Firebird prohibits this by default, primarily because it cannot guarantee exclusive access to the file or the ordering of writes over a network share. Look for the RemoteFileOpenAbility option in firebird.conf if you want to override this; the comments there explain why it is risky.
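For reference, a minimal sketch of that override, assuming you accept the corruption risk (the setting name is real; nothing else about your setup is known here):
# firebird.conf
# Allow database, shadow and other server-side files on network shares.
# Firebird cannot guarantee exclusive access or write ordering there,
# so enable this only if you understand the corruption risk.
RemoteFileOpenAbility = 1
Restart the Firebird server after changing it.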

Related

Postgresql - restore SQL dump with tablespaces

I'm planning to move some tables to different tablespaces (folders) on my PROD Linux box.
Overnight DB backups are done using pg_dumpall
I also have a DEV environment running under Windows OS, where I usually restore the SQL dump (made on Linux).
I'm now worried about how to restore such SQL dumps, since they contain pointers to Linux partitions, in Linux notation.
I have read on various web pages that the same folder structure has to be created in order to restore non-standard tablespaces. But folder paths in Windows and Linux look totally different (c:\... vs /opt/...).
Is there any command line switch that allows remapping a tablespace to another (Windows-like) location during restore? If not, how do you manage that scenario?
I guess I should be able to achieve that by editing the SQL dump file - but it's huge, a few-hundred-gig file, and a bit problematic to automate.
You can retrieve the actual tablespace definitions with a separate pg_dumpall command. You still need to do some editing, but the output is not that large. (Similar for users.)
pg_dumpall --tablespaces-only >stuff.out
(Note that pg_dumpall dumps cluster-wide objects and does not take a database name.)
There is no option to remap tablespace names during import, so you will need to create them in your Windows installation with the same names - the actual physical location ("folder structure") is irrelevant, as the SQL dump only references tablespaces by name.
If the script contains a CREATE TABLESPACE command, you need to change that command to use a directory path that exists on your system before you can run the SQL script. But that is the only change needed; everything else refers to the tablespace by name, not by folder path.
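For illustration, a hedged sketch of what that edit looks like; the tablespace name and paths below are made up:
-- as dumped on the Linux box:
CREATE TABLESPACE fast_data OWNER postgres LOCATION '/opt/pgdata/fast';
-- edited for the Windows DEV box (same name, new LOCATION):
CREATE TABLESPACE fast_data OWNER postgres LOCATION 'c:/pgdata/fast';
PostgreSQL on Windows accepts forward slashes in the LOCATION path; the directory must already exist and be writable by the postgres service account.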
Typically pg_dump is easier than pg_dumpall for moving databases around (e.g. because of tablespaces).
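As a sketch of that route, assuming a custom-format dump (the database name is a placeholder): pg_restore can simply drop all tablespace assignments, so everything lands in the default tablespace and the path problem disappears:
# on the Linux box: custom-format dump of one database
pg_dump -Fc -f mydb.dump mydb
# on the Windows box: restore, ignoring tablespace assignments
pg_restore --no-tablespaces -d mydb mydb.dump
(pg_dump itself also accepts --no-tablespaces when producing plain-text dumps.)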

DB2 SQL3706N A disk full error was encountered

I have 600+ files to load into a DB2 database (version 10.5.9). Each file is nearly 200 MB, and I have a batch script that uploads the files in a loop.
The disk "/mnt/blumeta0/db2/copy" is 16 GB.
If I run the upload in NONRECOVERABLE mode it works, but I can't do that in my prod database.
I tried db2 connect refresh and db2 terminate after each file upload, but that didn't work.
I manually cleaned up the disk /mnt/blumeta0/db2/copy, but the total size of all files is more than 16 GB, so I got the same error.
I cannot clean the folder in the script, as cleanup requires a super user.
db2 "LOAD FROM $i OF DEL INSERT INTO <table_name>"
SQL3706N A disk full error was encountered on "/mnt/blumeta0/db2/copy".
How does the DB2 server clean the copy folder? Is there any other alternative I can try?
You indicated that the Load succeeds when using NONRECOVERABLE mode, but otherwise fails with the error: SQL3706N A disk full error was encountered on "/mnt/blumeta0/db2/copy".
I'm guessing that the Load is being performed using the COPY YES option. Since the Load command that you pasted does not show the COPY YES option, I'm guessing that you have a special configuration setting enabled that forces Load operations to use COPY YES in order to prevent the table from becoming inaccessible in a rollforward recovery event or HADR standby takeover event. The name of this configuration setting (registry variable) is "DB2_LOAD_COPY_NO_OVERRIDE".
When the Load is performed with COPY YES, a copy of the table pages/extents that were generated during the Load operation is written into a copy image file.
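For context, this is roughly what an explicit COPY YES load looks like (the table name and target path are placeholders, following the same loop variable as your script):
db2 "LOAD FROM $i OF DEL INSERT INTO <table_name> COPY YES TO /path/with/enough/space"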
I suspect that you have the registry variable "DB2_LOAD_COPY_NO_OVERRIDE=COPY YES /mnt/blumeta0/db2/copy" configured (you can use db2set -all on the database server to display all configured registry variables). If so, the copy image files are being stored in this path, which at 16GB appears to be too small to contain them all.
You can consider changing this path to a location with more disk space; however, the path should always be accessible in the event of a database rollforward recovery or HADR standby takeover, otherwise the table will not be accessible after such an event.
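A sketch of checking and moving that path (the new path is a placeholder; registry variable changes take effect after an instance restart):
# display all configured registry variables
db2set -all
# repoint the forced copy images at a larger filesystem,
# keeping the same value format that db2set -all reported
db2set DB2_LOAD_COPY_NO_OVERRIDE="COPY YES /mnt/bigger_disk/db2copy"
db2stop
db2start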

Change path of Firebird Secondary database files

I have created a Firebird multi-file database
Main Database file D:\Database\MainDB.fdb
Secondary files (240 Files) located under D:\Database\DBFiles\Data001.fdb to D:\Database\DBFiles\Data240.fdb
When I copy the database to another location and try to open it, Firebird doesn't locate the secondary files if they are not in the D:\ partition.
I want Firebird to locate the secondary files under the Database\DBFiles folder at the new path.
So if I copy the database to C:\Database\MainDB.fdb,
Firebird would open Data001.fdb at the new path, C:\Database\DBFiles, instead of the old path, D:\Database\DBFiles, where the files were initially created.
Can that be done with Firebird? If not, how should it be done?
Update:
Finally I found out that it's not possible to change the paths of a Firebird database's secondary files using Firebird itself.
But I found this Firebird FAQ, which mentions the GLINK tool. It doesn't support Firebird 3.x, so I didn't test it, and it's not recommended to use it even with supported versions of Firebird.
Done what exactly?
UPD. I edited the very vague original question to make clear what the topic starter wants.
You cannot reliably "copy files with Firebird" - Firebird is not a file-copying tool. You can, to a degree, use EXTERNAL TABLE for raw file access, but that is very limited and does not work on the database itself.
It is dangerous practice to "copy databases" while Firebird is running, because you would only copy part of the data. Recently updated data that is in the memory cache but has not yet made it to disk would be lost, leaving the database file inconsistent, with some data updated and some not. Before you "copy database files", you first have to shut down either those databases or the whole Firebird server.
Firebird has its own tools for moving databases around - and those are called backup/restore tools. Maybe what you need is the nbackup tool, if gbak is too slow for you.
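As a hedged sketch of the gbak route for a multi-file database (the user, password, page count and target layout below are made up; at restore time each file name except the last is followed by its size in database pages):
# back up the multi-file database into one backup file
gbak -b -user SYSDBA -password masterkey D:\Database\MainDB.fdb D:\MainDB.fbk
# restore, re-declaring the file layout at the new location
gbak -c -user SYSDBA -password masterkey D:\MainDB.fbk C:\Database\MainDB.fdb 200000 C:\Database\DBFiles\Data001.fdb
In practice you would script the full list of 240 secondary file names rather than type them.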
Finally, you can list the files that comprise the database. You can do it via the gstat utility or via the "Services API" it uses; you can also select from the RDB$FILES system table. However, what would you do after that? The very act of attaching to the database makes it badly suited for subsequent copying, as explained above. You would have to shut down the database, or turn it to a read-only AND single-user state, and only then attach to it and read RDB$FILES. And after the copying is done, you would have to bring the database back out of shutdown. Rather more complex than nbackup.
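For completeness, the RDB$FILES query itself is a one-liner (run it in isql):
SELECT RDB$FILE_SEQUENCE, RDB$FILE_START, RDB$FILE_NAME
FROM RDB$FILES
ORDER BY RDB$FILE_SEQUENCE;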
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gstat-example-header.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-files.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gbak.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/nbackup.html

Postgres equivalent to Oracle's "DIRECTORY" objects

Is it possible to create a "DIRECTORY" object in Postgres?
If not, can someone help me with a solution for how to implement the equivalent in PostgreSQL?
Not the best option, but you could use:
COPY (select 1) TO PROGRAM 'mkdir --mode=777 -p /path/to/your/directory/'
Note that only the last part of the directory path gets the permissions set in mode.
There is no equivalent concept to an "Oracle directory" in Postgres.
The alternatives depend on why the "Oracle directory" is needed.
If the directory is needed to read and write files on the database server, this can be done through the Generic File Access Functions. Access to those functions is restricted to superusers (details in the linked section of the manual). If regular users should be able to use them, the best approach is to create wrapper functions and then grant EXECUTE on those functions to the users in question.
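A minimal sketch of such a wrapper, assuming a role named report_reader and a superuser creating the function (both names are made up):
-- SECURITY DEFINER makes the function run with its owner's
-- (the superuser's) privileges
CREATE FUNCTION read_report(p_file text) RETURNS text
LANGUAGE sql SECURITY DEFINER
AS $$ SELECT pg_read_file(p_file) $$;
REVOKE ALL ON FUNCTION read_report(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION read_report(text) TO report_reader;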
For security reasons, only directories inside the database cluster can be accessed by those functions.
But it's possible to create symlinks inside the data directory that point to directories outside it. Access privileges on those directories need to be set up properly for the postgres operating system user (the one under which the postgres process is started).
If the directory is needed to access e.g. CSV files in the manner of Oracle's external tables, then there is no need for a "directory": the file_fdw foreign data wrapper can access files outside the data directory (provided access privileges have been set up correctly at the file system level).
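A sketch of the file_fdw route (the server, table and file names are hypothetical; the postgres OS user must be able to read the file):
CREATE EXTENSION file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
-- maps the CSV file to a read-only foreign table
CREATE FOREIGN TABLE ext_sales (id integer, amount numeric)
SERVER csv_files
OPTIONS (filename '/data/imports/sales.csv', format 'csv');
SELECT * FROM ext_sales;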
The question doesn't quite make sense as asked. PostgreSQL is a database management system; it doesn't expose files and directories.
The closest parallel I can think of is schemas - see CREATE SCHEMA.
Now, if you want to use COPY to write output to the server's disk and want to create a directory to put that output in... then no, there's nothing like that built in. But you can use PL/PerlU or PL/PythonU to do it easily enough.
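For example, a throwaway PL/Python helper along these lines could create the directory (the function name is made up, and the plpython3u extension must be installed; as an untrusted language it runs as the postgres OS user):
CREATE EXTENSION plpython3u;
CREATE FUNCTION make_dir(path text) RETURNS void
LANGUAGE plpython3u
AS $$
import os
os.makedirs(path, exist_ok=True)  # create the directory tree if missing
$$;
SELECT make_dir('/tmp/copy_output');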

Database MSDB can not be opened

I have this problem on a local instance of SQL Server 2008 R2 on my machine. There are several databases on the instance, but I am not able to see any of them from Object Explorer.
I am able to query my databases from a new query window, but not able to see any of them.
Whenever I try to explore the databases, I get this error:
Database 'msdb' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information. (Microsoft SQL Server, Error: 926).
I have tried
Refreshing the connection
Reconnecting the connection
Restarting the SQL Server (MSSQLSERVER) service
Restarting the SQL Server Management Studio
Restarting my machine
I have also tried combinations of above, but nothing works.
My operating system is Windows 7 Ultimate (64 bit).
SQL Server Management Studio Version is 10.50.2500.0.
I found my answer in this link.
EDIT: Including both solutions from the link because of possible linkrot in the future.
Log in with the sa account for both solutions.
Solution 1
Open new query window
EXEC sp_resetstatus 'DB_Name'; (Explanation: sp_resetstatus turns off the suspect flag on a database. This procedure updates the mode and status columns of the named database in sys.databases. Note that only logins with sysadmin privileges can perform this.)
ALTER DATABASE DB_Name SET EMERGENCY; (Explanation: Once the database is set to EMERGENCY mode it becomes a READ_ONLY copy, and only members of the sysadmin fixed server role have privileges to access it.)
DBCC CHECKDB ('DB_Name'); (Explanation: Check the integrity of all the objects.)
ALTER DATABASE DB_Name SET SINGLE_USER WITH ROLLBACK IMMEDIATE; (Explanation: Set the database to single-user mode.)
DBCC CHECKDB ('DB_Name', REPAIR_ALLOW_DATA_LOSS); (Explanation: Repair the errors.)
ALTER DATABASE DB_Name SET MULTI_USER; (Explanation: Set the database back to multi-user mode so that it can be accessed by others again.)
Solution 2
In Object Explorer --> the opened connection item --> right-click --> Stop
Open Control Panel --> Administrative Tools --> Services
Select the SQL Server (MSSQLSERVER) item from the services --> right-click --> Stop
Open C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA
Move MSDBData.mdf & MSDBlog.ldf to any other place
Then copy the files again from the new place and put them back in the old place:
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA
On the opened connection in Object Explorer --> right-click --> Start
Then refresh the database.
Then you can detach the msdb file.
The 2nd solution worked for me.
Note: I had to get the msdb database mdf and ldf files from another working machine to get this working.
What instantly fixed my issue was replacing the existing MSDBData.mdf & MSDBlog.ldf files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA. I copied these 2 files from another working machine, stopped the SQL service running on my machine, removed the 2 existing files from their location, and added the 2 new copies. Once I restarted the service, the issue was fixed.
Try this
Set the database into single user mode:
Alter database dbname set single_user
Now set the database into emergency mode:
Alter database dbname set emergency
Repair the missing or corrupted log file, with data loss:
DBCC CHECKDB ('dbname', REPAIR_ALLOW_DATA_LOSS)
Now set the db back into multi-user mode:
Alter database dbname set multi_user
You may lose data by using this command, so it also depends on the client's approval. To avoid this, you may use some other dedicated software (as mentioned here) to recover from suspect mode.