I'm using Firebird. I created a database and filled it with a lot of records (the database file grew to 113 MB). Then I deleted all the records, but the file size stayed the same. Is there any way to "shrink" or "pack" the DB file?
The page at Re: Firebird and vacuum? claims that you need to use gbak to rebuild the database for that. Note that the file will not grow when you insert new records until the total amount of data reaches 113 MB again, so you may simply want to leave the file at that size.
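A minimal sketch of that gbak backup-and-restore cycle, wrapped in Python. The paths, user, and password below are placeholders for your own setup, and nobody should be connected to the database while it runs:

```python
import subprocess

def gbak_commands(db, backup, new_db, user="SYSDBA", password="masterkey"):
    """Build the two gbak invocations: back up, then restore to a fresh file."""
    creds = ["-user", user, "-password", password]
    return [
        ["gbak", "-b", *creds, db, backup],      # -b: back up only live data
        ["gbak", "-c", *creds, backup, new_db],  # -c: create a new, compact database
    ]

def rebuild(db, backup, new_db):
    for cmd in gbak_commands(db, backup, new_db):
        subprocess.run(cmd, check=True)
```

Once the restore succeeds, you can swap the new file in for the old one; because the restored copy contains only live data, its size drops back to what is actually in use.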
Related
I'm attempting to mock a database for testing purposes. What I'd like to do is, given a connection to an existing Postgres DB, retrieve the schema, limit the data pulled to 1000 rows from each table, and persist both of these components as a file that can later be imported into a local database.
pg_dump doesn't seem to fulfill my requirements, as there's no way to tell it to retrieve only a limited number of rows from each table; it's all or nothing.
COPY/\copy commands can help fill this gap; however, there doesn't seem to be a way to copy data from multiple tables into a single file. I'd rather avoid creating a separate file per table. Is there a way to work around this?
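One workaround sketch, assuming you can script the export: run `COPY (SELECT ... LIMIT n) TO STDOUT` per table through psql and append everything to a single file. The table names and database name here are made up:

```python
def dump_commands(tables, dbname, outfile, limit=1000):
    """Generate shell commands that append up to `limit` rows of each table
    to a single file, using COPY ... TO STDOUT."""
    cmds = []
    for t in tables:
        sql = f"COPY (SELECT * FROM {t} LIMIT {limit}) TO STDOUT WITH CSV HEADER"
        # '>>' appends, so every table lands in the same output file
        cmds.append(f'psql -d {dbname} -c "{sql}" >> {outfile}')
    return cmds
```

Note the resulting mixed file isn't directly loadable with a single COPY FROM; you'd still need to split it per table (or emit a marker line before each section) on the way back in.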
I am using SQL Server 2008 R2, with an application deployed on it that receives data every 10 seconds. As a result, my log file has grown to 40 GB. How can I reduce the size of the growing log file? (I have tried shrinking, but it didn't work for me.) How can I solve this issue?
Find the table that stores the exception log in your database, i.e. the table (and its child tables) that gets populated whenever you perform an operation in your application.
Truncate these tables. For example:
a) truncate table SGS_ACT_LOG_INST_QUERY
b) truncate table SGS_ACT_LOG_THRESHOLD_QUERY
c) truncate table SGS_EXCEPTION_LOG
Don't truncate these types of tables on a daily basis; do it only when the database size increases because of them.
i) SGS_EXCEPTION_LOG (the table that stores exception logs in your DB)
ii) SGS_ACT_LOG_INST_QUERY (a table that stores information whenever an operation is performed on the database)
iii) SGS_ACT_LOG_THRESHOLD_QUERY (a table that stores information whenever an operation is performed on the database)
I have a Postgres database backup that contains 100+ tables, most of which have many rows (100K+). But when I restored the database from the backup file (production data, containing a large volume of data), one table was restored with less data: nearly 300K rows are missing. Is it possible for this to happen, or am I missing something?
Thanks in advance
One option could be the following: store the data directory from the old DB in a zip file and try again. More description here
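Before resorting to that, it may be worth confirming which tables actually lost rows by comparing per-table `SELECT count(*)` results from the source and the restored database. A small sketch of that comparison (the counts would come from your own queries):

```python
def missing_rows(source_counts, restored_counts):
    """Compare per-table row counts from the source and the restored DB;
    return the tables where the restore came up short, and by how much."""
    return {
        table: source_counts[table] - restored_counts.get(table, 0)
        for table in source_counts
        if restored_counts.get(table, 0) < source_counts[table]
    }
```

If only one table shows a shortfall, the next step is usually to re-run the restore and watch its output for errors on that specific table.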
Michael
I'm running an app on an ASA8 database. At the beginning, with a 'clean' database, the size was 9776 KB. Now, after a while, populated with 30 tables holding lots of data, the size is still 9776 KB.
My question is: is there a ceiling on how much data can be added to this DB, or will the file automatically grow in size when it needs to?
Thanks in advance
Alex
I have been running an app on ASA8 for years. Even an empty database (in terms of data) has a certain size because of its data structures. This is also influenced by the page size of the database.
When you begin adding data, the database may be able to fit the data into the existing pages without extending the file for some time. But you should soon see the database file size increase.
ASA writes new or updated data to a log file in the same directory as the database file (by default). The changes in this log file are merged into the main database when the database server executes a so-called checkpoint. Check the server log messages; a message should inform you when checkpoints happen. After a checkpoint, the timestamp of the database file gets updated. Check this too.
I need to know: is there any possibility of restoring data in a collection or database after it was dropped?
The OS, by default (and on Windows, in any case), will not let you restore deleted data. You will need a third-party program that can read the raw disk sectors. It is also worth noting that while dropping a database deletes its files, dropping a collection does not; instead, the files get nulled.
Dropping a collection should make it nearly impossible to retrieve the data, since the disk sectors that were used have been overwritten (essentially a single pass of zeros).
So the files may be recoverable after a database drop, but even that is questionable.