I have written a procedure that reads data from several tables and updates a master table. Each table contains between 50k and 100k records. Whenever I execute this procedure, archive log files are generated, and their size is more than my disk can handle: each execution produces over 30GB of log files.
What are the possible ways to handle this?
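One common approach is to replace any row-by-row processing inside the procedure with a single set-based statement and commit once at the end, which tends to generate less redo than per-row loops with frequent commits; for insert-only work, a direct-path insert into a NOLOGGING table generates far less redo (updates always generate redo). A rough sketch with made-up names (master_table, src_table, staging_table, id, val); your procedure's actual logic will differ:

-- Set-based upsert instead of a row-by-row loop, committed once at the end.
MERGE INTO master_table m
USING (SELECT id, val FROM src_table) s
ON (m.id = s.id)
WHEN MATCHED THEN
  UPDATE SET m.val = s.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (s.id, s.val);
COMMIT;

-- For insert-only loads, a direct-path insert into a NOLOGGING table
-- generates minimal redo (this needs the database not to be in
-- FORCE LOGGING mode, and the table should be backed up afterwards).
ALTER TABLE staging_table NOLOGGING;
INSERT /*+ APPEND */ INTO staging_table (id, val)
  SELECT id, val FROM src_table;
COMMIT;

Updates to an existing master table cannot avoid redo entirely, so whatever volume remains has to be absorbed by archive log housekeeping, e.g. backing up and deleting archived logs more frequently.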
I'm trying to load some large data sets from CSV into a Postgres 11 database (Windows) to do some testing. The first problem I ran into was that with a very large CSV I got this error: ERROR: could not stat file "D:/temp/data.csv": Unknown error. After searching, I found a workaround: load the data from a zip file. So I set up 7-zip and was able to load some data with a command like this:
psql -U postgres -h localhost -d MyTestDb -c "copy my_table(id,name) FROM PROGRAM 'C:/7z e -so d:/temp/data.zip' DELIMITER ',' CSV"
Using this method, I was able to load a bunch of files of varying sizes, including one with 100 million records that was 700MB zipped. But I have one more large file with 100 million records, around 1GB zipped, and that one is giving me grief. Basically, the psql process just keeps running and never stops. From the growing data files I can see that it generates data up to a certain point, but then the files stop growing. I'm seeing 6 files in a data folder named 17955, 17955.1, 17955.2, etc. up to 17955.5. The Date Modified timestamp on those files keeps being updated, but they're not growing in size, and my psql program just sits there. If I kill the process, I lose all the data, since I assume it gets rolled back when the load does not run to completion.
I looked at the logs in the data/log folder; there doesn't seem to be anything meaningful there. I'm not very used to Postgres (I've used SQL Server the most), so I'm looking for tips on where to look, what extra logging to turn on, or anything else that could help figure out why this process is stalling.
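For anyone hitting the same thing, one place to look while a load appears stuck is pg_stat_activity from a second session; it shows whether the backend running the COPY is actively working or waiting on something (the wait_event columns exist in Postgres 9.6 and later):

-- Run from a second psql session while the COPY is running.
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE state <> 'idle';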
Figured it out thanks to #jjanes' comment above (sadly he/she didn't add an answer). I was adding 100 million records to a table with a foreign key to another table that also has 100 million records. I removed the foreign key, added the records, and then re-added the foreign key; that did the trick. I guess checking the foreign key is just too much work for a bulk insert of this size.
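For anyone doing the same thing, the sequence looks roughly like this; the constraint, column, and table names below are hypothetical placeholders:

-- Drop the foreign key, bulk load, then re-add it.
ALTER TABLE my_table DROP CONSTRAINT my_table_parent_id_fkey;

-- (run the COPY / psql load here as before)

-- Re-adding the constraint validates all rows in one pass, which is much
-- cheaper than checking them one by one during the load.
ALTER TABLE my_table
  ADD CONSTRAINT my_table_parent_id_fkey
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);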
We are creating dump files of multiple databases every hour, but the issue is storage and restoration time. We have multiple locations that create dump files, and we use a VPN to fetch them. The files grow larger day by day. Please suggest a solution.
I am using SQL Server 2008 R2 with an application deployed on it that receives data every 10 seconds. Because of this, the size of my log file has grown to 40GB. How can I reduce the size of the growing log file? (I have tried shrinking, but it didn't work for me.) How do I solve this issue?
Find the table that stores the exception log in your database, i.e. the table (and its child tables) that gets populated whenever you perform an operation in your application.
Truncate these tables. For example:
a) truncate table SGS_ACT_LOG_INST_QUERY
b) truncate table SGS_ACT_LOG_THRESHOLD_QUERY
c) truncate table SGS_EXCEPTION_LOG
Don't truncate these types of tables on a daily basis; do it only when the size of the database increases because of the log file size.
i) SGS_EXCEPTION_LOG (table that stores exception logs in your DB)
ii) SGS_ACT_LOG_INST_QUERY (table that stores information whenever any operation is performed on the database)
iii) SGS_ACT_LOG_THRESHOLD_QUERY (table that stores information whenever any operation is performed on the database)
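Separately, if the 40GB is in the transaction log file (.ldf) itself rather than in logging tables, the usual reason shrinking "doesn't work" is that the database is in FULL recovery and the log has never been backed up. A rough sketch; MyDb and the logical file name MyDb_log are placeholders (find the logical name with sp_helpfile):

USE MyDb;
-- Option A: keep FULL recovery; back up the log first, then shrink.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';
DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB
-- Option B: if point-in-time recovery is not required, switch to SIMPLE.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDb_log, 1024);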
You've heard the story: disk fails, no recent DB backup, recovered files in disarray...
I've got a pg 9.1 database with a particular table I want to bring up to date. The postgres data/base/444444 directory contains all the raw files with table and index data. I can identify one particular table, and its files are as follows:
[relfilenode]
[relfilenode]_vm
[relfilenode]_fsm
where [relfilenode] is the number corresponding to the table I want to reconstruct.
In the current out-of-date database the main [relfilenode] file is 16MB.
In my recovered files, I have found the corresponding [relfilenode] file and the _vm and _fsm files. The main [relfilenode] file is 20MB, so I'm hopeful that it contains more up-to-date data.
However, when I swap the files over, restart my machine (OS X), and inspect the table, it has approximately the same number of records in it (not exactly the same).
Question: is it possible just to swap out these files and expect it to work? How else can I extract the data from the 20MB table file? I've read the other threads here regarding constructing from raw data files.
Thanks.
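For reference, the mapping between a table and its data files can be checked from SQL in the running (out-of-date) database before swapping anything; my_table here is a placeholder for the actual table name:

-- Shows the on-disk path of the table's main fork, e.g. base/444444/16384.
SELECT pg_relation_filepath('my_table');
-- Or look up the relfilenode directly.
SELECT relfilenode FROM pg_class WHERE relname = 'my_table';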
I'm running an app on an ASA8 database. At the beginning, with a 'clean' database, the size was 9776 KB. Now, after a while and having been populated with 30 tables holding lots of data, the size is still 9776 KB.
My question is: is there a ceiling on how much data can be added to this DB, or will the file automatically increase in size when it needs to?
Thanks in advance
Alex
I have been running an app on ASA8 for years. Even an empty database (in terms of data) has a certain size because of its internal structure. This is also influenced by the page size of the database.
When you begin adding data, the database may be able to fit the data into the existing pages for some time without needing to extend the file. But you should soon see the database file size increase.
ASA writes new or updated data to a log file in the same directory as the database file (the default option). The changes from this log file are fed into the main database when the database server executes a so-called checkpoint. Check the server log messages: a message should tell you when checkpoints happen. After a checkpoint, the timestamp of the database file gets updated. Check this too.
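If you want to check this from SQL, the page size and current file size can be queried with the SQL Anywhere DB_PROPERTY function; this is a sketch, and the property names used here (PageSize, FileSize, CheckpointUrgency) are the ones I believe ASA supports:

-- Page size in bytes, main database file size in pages, and how close the
-- server is to its next checkpoint (as a percentage).
SELECT DB_PROPERTY('PageSize')          AS page_size_bytes,
       DB_PROPERTY('FileSize')          AS file_size_in_pages,
       DB_PROPERTY('CheckpointUrgency') AS checkpoint_urgency_pct;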