ASA 8 database.db file size doesn't increase - sybase-asa

I'm running an app on an ASA 8 database. With a 'clean' database, the file size started at 9776 KB. Now, after a while, with 30 tables populated with lots of data, the size is still 9776 KB.
My question is - is there a ceiling on how much data can be added to this database, or will the file grow automatically when it needs to?
Thanks in advance
Alex

I have been running an app on ASA 8 for years. Even an empty database (in terms of data) has a certain size because of its internal structure, which is also influenced by the database's page size.
When you begin adding data, the database may be able to fit it into the existing pages for some time without extending the file. But you should soon see the file size increase.
ASA writes new or updated data to a log file in the same directory as the database file (the default option). The changes from this log file are merged into the main database file when the database server executes a so-called checkpoint. Check the server's log messages - a message should inform you about each checkpoint. After a checkpoint, the timestamp of the database file is updated; check this too.
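You can also watch the size from inside the database. A minimal sketch, assuming ASA's standard DB_PROPERTY function (FileSize is reported in pages, so multiply by the page size to get bytes):
SELECT DB_PROPERTY('FileSize') * DB_PROPERTY('PageSize') AS file_bytes;
CHECKPOINT;
Run the SELECT again after the CHECKPOINT and after loading more data to see the file grow.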

Related

How to reduce the size of the transaction log file in SQL Server

I am using SQL Server 2008 R2, with an application deployed on it that receives data every 10 seconds. Because of this, my log file has grown to 40 GB. How can I reduce the size of the growing log file? (I have tried shrinking, but it didn't work for me.) How do I solve this issue?
Find the tables that store the exception log in your database - the tables (and their child tables) that get populated whenever you perform an operation in your application - and truncate them. For example:
a) truncate table SGS_ACT_LOG_INST_QUERY
b) truncate table SGS_ACT_LOG_THRESHOLD_QUERY
c) truncate table SGS_EXCEPTION_LOG
Don't truncate these types of tables on a daily basis; do it only when the database grows because of the log size (a sketch of the follow-up shrink appears after the list below).
i) SGS_EXCEPTION_LOG (the table that stores exception logs in your DB)
ii) SGS_ACT_LOG_INST_QUERY (stores a record whenever an operation is performed on the database)
iii) SGS_ACT_LOG_THRESHOLD_QUERY (stores a record whenever an operation is performed on the database)
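Note that truncating tables frees space inside the data file but does not by itself shrink the 40 GB log file on disk. A minimal follow-up sketch for SQL Server 2008 R2, assuming the database is called YourDb and its logical log file name is YourDb_log (both placeholders; skip the SIMPLE switch if you rely on log backups for point-in-time restore):
USE YourDb;
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE (YourDb_log, 1024);
ALTER DATABASE YourDb SET RECOVERY FULL;
The second argument to DBCC SHRINKFILE is the target size in MB.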

File loading issues in DB2 using Load utility

I have a comma-delimited .csv file (located at C:/). I am using the DB2 LOAD utility to load the data from the CSV file into a DB2 table.
LOAD CLIENT FROM C:\Users\somepath\FileName.csv OF DEL
MODIFIED BY NOCHARDEL COLDEL, INSERT INTO SchemaName.TABLE_NAME;
The CSV file has 25 rows. After the utility completed, I got an error message for NOCHARDEL, yet all 25 rows were loaded into my table properly. Now, when I try to execute an insert/update/delete statement on any of the tables in that schema, I get the following error.
Lookup Error - DB2 Database Error: ERROR [55039] [IBM][DB2/AIX64] SQL0290N Table space access is not allowed.
Could you please help me figure out whether I am making a mistake or missing a parameter that is causing a lock on the table?
A similar situation occurred earlier while loading the file, and the DBA confirmed that the table space in question was in the "load in progress" state.
Changes generated by the DB2 LOAD utility are not logged (one of the side effects of its high performance). If the database crashes immediately after the load, it will be impossible to recover the loaded table by replaying log records, because there are no such records. For this reason, the tablespace containing the loaded table is automatically placed in BACKUP PENDING mode, forcing you to take a backup of that tablespace (or the entire database) to ensure it is fully recoverable.
There are options that you can specify for the LOAD command that can help you avoid this situation in the future:
NONRECOVERABLE -- this option does not place the tablespace into the BACKUP PENDING mode, but, as its name implies, the table you're loading to becomes non-recoverable in case of a crash, and your only option in that situation will be to drop and re-create the table.
COPY YES -- this option saves a copy of the loaded data during the LOAD, which recovery can then use to bring the table back in case of a crash.
If you are only loading 25 records, I suggest you use the IMPORT utility instead -- it does not have these restrictions because it is fully logged (at the price of lower performance, which won't matter for 25 records).
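To make those options concrete, here is a minimal sketch based on the original command (NOCHARDEL is omitted since it triggered the warning - whether you need it depends on your data - and D:\loadcopy is a placeholder directory):
LOAD CLIENT FROM C:\Users\somepath\FileName.csv OF DEL MODIFIED BY COLDEL, INSERT INTO SchemaName.TABLE_NAME NONRECOVERABLE;
LOAD CLIENT FROM C:\Users\somepath\FileName.csv OF DEL MODIFIED BY COLDEL, INSERT INTO SchemaName.TABLE_NAME COPY YES TO D:\loadcopy;
IMPORT FROM C:\Users\somepath\FileName.csv OF DEL INSERT INTO SchemaName.TABLE_NAME;
The first form skips BACKUP PENDING at the cost of recoverability, the second keeps the table recoverable via the saved copy, and the third is the fully logged IMPORT alternative.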
Thanks @mustaccio. I had 60 million rows to insert; I was using 25 as a sample to check the outcome.
To add another point: we later learned that this is a known DB2 bug that keeps the load in the "in progress" state (DB2 is unable to acknowledge that the load has completed, and the session remains open indefinitely) and places the table space in the backup pending state.
Once the table space is in the pending state, recovery is the only way to release it.
According to the DB2 team, this issue is fixed in fix pack 10 (we have yet to deploy and test it). Meanwhile, the NONRECOVERABLE keyword is working fine for us.
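For reference, a table space backup is the standard way to clear the BACKUP PENDING state. A minimal sketch from the DB2 command line, where MYDB, TS_DATA, and the target path are placeholders:
BACKUP DATABASE MYDB TABLESPACE (TS_DATA) ONLINE TO D:\backups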
The reason your table is stuck in the LOAD IN PROGRESS state is the NOCHARDEL error that happened at the end of the LOAD.
Have you tried restarting the database? This should reinitialize all table spaces and remove any rogue states.
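If you want to try that, a minimal sketch from the DB2 command line processor, with MYDB as a placeholder database name:
db2 FORCE APPLICATION ALL
db2 DEACTIVATE DATABASE MYDB
db2 ACTIVATE DATABASE MYDB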
http://www-01.ibm.com/support/docview.wss?uid=swg1IC65395
http://www-01.ibm.com/support/docview.wss?uid=swg21427102

MongoDB creates database files automatically after a certain period

The MongoDB database generates files automatically after a certain period, as follows:
Doc.0
Doc.1
Doc.2
Doc.3
Doc.4
but the Doc.ns file is never regenerated like the files above.
I'm not sure exactly what, if anything, you are identifying as a problem - this is expected behavior. MongoDB preallocates new data files (Doc.0, Doc.1, ...) as the data grows. The .ns file, which stores namespace information, does not grow like the data files and shouldn't need to.
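If you want to confirm this, a minimal check from the mongo shell: db.stats() reports fileSize (space allocated on disk, including preallocated files) alongside dataSize (actual data), so you can see the preallocation at work. The exact fields depend on your MongoDB version:
db.stats()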

Esent database engine limited to specific page sizes?

I had a problem opening an ESENT database (Windows.edb) because of its page size. The page size of Windows.edb on my system is 32K. When I set this via JET_paramDatabasePageSize, JetInit would return error -1213 (the database page size does not match the engine). Laurion Burchall suggested turning off JET_paramRecovery since I only need read-only access to the database. That solved my problem.
Until now, that is. I have a database that was not shut down cleanly. I assumed that, with JET_paramRecovery on, JetInit would automagically run recovery and let me read the database. But when I try that, I get the old -1213 error.
I can fix my file with ESENTUTL, but an ordinary user of my app won't be able to. Is there some way to have recovery on and still be able to define ANY database page size? There are no log files present at the location of the database (and I set the log path to the same directory to make sure they aren't written anywhere else).
Does this mean that the engine on my machine does not support the page size of the database? Or could I solve the problem by setting another magic switch?
Running recovery on another application's database is tricky. ESENT is an embedded engine and each application can have its own settings. Before you run recovery you need to know:
Where the logfiles are located (JET_paramLogFilePath)
The logfile size (JET_paramLogFileSize)
The database page size (JET_paramDatabasePageSize)
The logfile basename (JET_paramBaseName)
If you set all those parameters correctly, recovery will work properly. If you don't, the other application may have problems recovering its database!
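If you just need to repair the file out of band, the ESENTUTL route mentioned in the question looks roughly like this (a hedged sketch - the e00 base name and paths are assumptions, and exact switch syntax varies by Windows version):
esentutl /r e00 /l C:\path\to\logs /d C:\path\to\db
esentutl /p C:\path\to\Windows.edb
The /r form replays the logs (soft recovery); /p is a last-resort repair that discards damaged pages.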
There is a simple (but tricky) way to "fix" an EDB database that wasn't shut down gracefully. There is a state flag in the header at offset 52: a 4-byte integer that should be set to 3 (clean shutdown); if the database wasn't closed gracefully, the value you find will probably be 2.
You will probably need to repeat this change on the second database page, which contains a copy of the database header. You can find that page simply by using the page size of the database (usually 4096, 8192, etc.).
As this is really a hack, you should use it at your own RISK!

Firebird DB size does not change

I'm using the Firebird DB, and I did the following: I created a DB and filled it with a lot of records (DB file size: 113 MB). Then I deleted all the records, but the size remained the same. Is there any way to "shrink" or "pack" the DB file?
The page at Re: Firebird and vacuum? claims that you need to use gbak to rebuild the database for that. Note that the file will not grow when you insert records until the total amount of data hits 113MB again, so you might just want to leave the file at that size.
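A minimal sketch of that gbak backup/restore cycle, with file names and credentials as placeholders (restore to a new file and swap it in only after verifying it):
gbak -b -user SYSDBA -password masterkey mydb.fdb mydb.fbk
gbak -c -user SYSDBA -password masterkey mydb.fbk mydb_packed.fdb
The -b run writes a backup, and the -c run creates a fresh database from it without the dead record versions, so the new file reflects only the live data.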