I have an SAP system running on DB2, and DB2 creates FODC files and stores them on the server under the /db2dump folder. I have FODC files from the past two years and they are filling up my server's disk space. Will removing these files cause any issues? Do I need to back up these files before removing them?
I'm hosting my DB using AWS RDS and I'm trying to back up tables. However, once the backup has finished, where is the file downloaded on my computer?
There doesn't seem to be a path to save the file.
I've checked a couple of answers and others are having the same issue:
https://stackoverflow.com/a/29246636/11110509
The "Filename" element in that dialog box lets you pick a directory as well as file name. That is where it is. If you just typed in a filename without giving a path, then on Windows it is probably in your user's "Documents" folder.
I have a database with two schemas. The first schema was built weeks ago and has been stable ever since. The second schema, under the same database on the same server, is populated through an ETL process from the original schema. I built the data twice (approx. 20 hours per build). I can see the schema taking the 100 GB it requires on the hard drive. Upon connecting with pgAdmin 4 or DataGrip, I can see the data instantly gets truncated (deleted), freeing up the space it takes. After the second build, before connecting to anything, I made a file-system-level backup (tar file).
For the first restore attempt (my third try at keeping the schema alive), I uncompressed the tar file and moved it into position, keeping an uncompressed copy of the folder in place.
I connected with pgAdmin 4 and the data disappeared again. Then I edited the PostgreSQL configuration's data directory to point at the folder where I had initially uncompressed the tar file, to avoid another two hours of copying. I launched the PostgreSQL server again and, boom, the schema's data was truncated again.
I have no clue how or why this happens. Any advice on where to look next time before relaunching the server to pinpoint where that truncate command is fired from?
PS:
The tar file is a compressed copy of the "main" folder inside the .../postgresql/11/ directory.
Thanks in advance.
I have created a Firebird multi-file database:
Main database file: D:\Database\MainDB.fdb
Secondary files (240 files): D:\Database\DBFiles\Data001.fdb to D:\Database\DBFiles\Data240.fdb
When I copy the database to another location and try to open it, Firebird doesn't locate the secondary files if they are not on the D:\ partition.
I want Firebird to locate the secondary files under the Database\DBFiles folder at the new path.
So if I copy the database to C:\Database\MainDB.fdb,
Firebird should open Data001.fdb at the new path, C:\Database\DBFiles, instead of the old path, D:\Database\DBFiles, where the files were initially created.
Can that be done with Firebird? If not, how should it be done?
Update:
Finally, I found out it's not possible to change the paths of a Firebird database's secondary files using Firebird itself.
However, I found a Firebird FAQ that mentions the GLINK tool. It doesn't support Firebird 3.x, so I didn't test it, and it's not recommended even with supported versions of Firebird.
Done what exactly?
UPD: I edited the very vague original question to make clear what the topic starter wants.
You cannot reliably "copy files with Firebird" - Firebird is not a file-copying tool. You can, to a degree, use an EXTERNAL TABLE for raw file access, but that is very limited and does not work on the database files themselves.
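Just to illustrate what that limited external-table access looks like, here is a minimal sketch in Firebird SQL; the file path and the single fixed-length column are hypothetical placeholders, and the server's ExternalFileAccess setting in firebird.conf must allow that location:
-- Hypothetical example: expose a raw fixed-length text file as an external table.
-- 'C:\Data\import.txt' and the column layout are placeholders, not real objects.
CREATE TABLE ext_import EXTERNAL FILE 'C:\Data\import.txt' (
  line CHAR(80)
);

-- Read the raw file contents through SQL.
SELECT line FROM ext_import;
This reads fixed-format records from a plain file; it is not a way to work with the .fdb database files themselves.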
It is dangerous practice to "copy databases" while Firebird is running, because you would only copy part of the data. Recently updated data that is still in the memory cache and has not yet made it to disk would be lost, and the copied database file would be inconsistent, with some data updated and some not. Before you "copy database files" you first have to shut down either those databases or the whole Firebird server.
Firebird has its own tools for moving databases around - they are called backup/restore tools. Maybe what you need is the nbackup tool, if gbak is too slow for you.
Finally, you can list the files that comprise the database. You can do it via the gstat utility or via the "Services API" it uses. You can also select from the RDB$FILES system table. However, what would you do after that? The very act of attaching to the database makes it badly suited for subsequent copying (see the point above about copying a live database). You would perhaps need to shut the database down, switch it to read-only and single-user state, and only then attach to it and read RDB$FILES. And after the copying is done, you would have to bring the database back out of shutdown. That is rather more complex than nbackup.
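For reference, a minimal sketch of the RDB$FILES query mentioned above, run while attached to the database (the ORDER BY is just one convenient presentation; the primary database file itself is not listed in this table):
-- List the secondary (and shadow) files registered for the attached database.
-- RDB$FILE_START and RDB$FILE_LENGTH are expressed in database pages.
SELECT RDB$FILE_NAME,
       RDB$FILE_SEQUENCE,
       RDB$FILE_START,
       RDB$FILE_LENGTH
FROM RDB$FILES
ORDER BY RDB$FILE_SEQUENCE;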
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gstat-example-header.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-files.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gbak.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/nbackup.html
My team and I are not professional database administrators, and we were trying to copy our database from one machine to another for backup purposes. Unluckily, we made the mistake of moving the data directory instead of copying it. Something unexpected happened during the process and the data was not moved completely. Right now we're missing data for the past month, and both machines are in the same state, i.e. neither the original nor the copy has data for that month. Is there any possibility of recovering this lost data, and if so, how do we go about it? The PostgreSQL version is 9.4, running on CentOS 7.
I need to automate the creation of a duplicate DB from the .bak of my production DB. I've done the operation plenty of times via the GUI, but when executing from the command line I'm a little confused by the various switches, in particular the file names and being sure ownership is correctly replicated.
I'm just looking for the T-SQL syntax for RESTORE that accomplishes that.
Assuming you're using SQL Server 2005 or 2008, the simplest way is to use the "Script" button at the top of the restore database dialog in SQL Server Management Studio. This will automatically create a T-SQL script with all the options/settings configured in the way you've filled in the dialog.
Look here: How to: Restore a Database to a New Location and Name (Transact-SQL), which has a good example:
This example creates a new database named MyAdvWorks. MyAdvWorks is a copy of the existing AdventureWorks database that includes two files: AdventureWorks_Data and AdventureWorks_Log. This database uses the simple recovery model. The AdventureWorks database already exists on the server instance, so the files in the backup must be restored to a new location. The RESTORE FILELISTONLY statement is used to determine the number and names of the files in the database being restored. The database backup is the first backup set on the backup device.
USE master
GO
-- First determine the number and names of the files in the backup.
-- AdventureWorks_Backup is the name of the backup device.
RESTORE FILELISTONLY
FROM AdventureWorks_Backup
-- Restore the files for MyAdvWorks.
RESTORE DATABASE MyAdvWorks
FROM AdventureWorks_Backup
WITH RECOVERY,
MOVE 'AdventureWorks_Data' TO 'D:\MyData\MyAdvWorks_Data.mdf',
MOVE 'AdventureWorks_Log' TO 'F:\MyLog\MyAdvWorks_Log.ldf'
GO
This may also help: Copying Databases with Backup and Restore
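Since the question restores from a .bak file rather than a logical backup device and also asks about ownership, here is a hedged sketch of the same pattern using FROM DISK; the paths, logical file names, database name, and the 'sa' login are placeholders to replace with your own values:
-- Inspect the logical file names stored inside the backup (path is a placeholder).
RESTORE FILELISTONLY
FROM DISK = N'D:\Backups\Production.bak';

-- Restore as a new database, moving each logical file to a new physical file.
RESTORE DATABASE MyDuplicateDb
FROM DISK = N'D:\Backups\Production.bak'
WITH RECOVERY,
MOVE 'Production_Data' TO N'D:\MyData\MyDuplicateDb.mdf',
MOVE 'Production_Log' TO N'F:\MyLog\MyDuplicateDb_log.ldf';

-- Ownership is not guaranteed to match the source after a restore;
-- set it explicitly if a particular owner is required.
ALTER AUTHORIZATION ON DATABASE::MyDuplicateDb TO sa;
The logical names passed to MOVE must match whatever RESTORE FILELISTONLY reports for your backup.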