Amazon RDS PostgreSQL: Sudden increase in Read IOPS

We are using Amazon RDS to host our PostgreSQL databases. Our production instance (db.t3.xlarge, Single-AZ) was running smoothly until the Read IOPS, Read Latency, Read Throughput and Disk Queue Depth metrics in the AWS console suddenly increased rapidly and stayed high afterward (with lower variability), whereas Write IOPS and Write Throughput remained normal.
[Screenshots: Read IOPS, Read Throughput, Disk Queue Depth and Write IOPS graphs from the AWS console]
There were no code changes or deployments on the date of the increase. There were no significant increases in user activity either.
Regarding our DB structure: we have a single table that holds all of our data, with these columns: id as UUID (primary key), type as VARCHAR, data as JSONB (holds the actual data), and createdAt and updatedAt as timestamp with time zone. Most of the data values are larger than 2 KB, so most rows are stored in the TOAST table. We have 20 BTREE indexes created on frequently used fields inside the JSONB column.
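Roughly, the structure looks like this (a simplified sketch: the table name, index names and JSONB keys are placeholders, and the two indexes shown only illustrate the pattern used by all 20):

CREATE TABLE documents (
    id          uuid PRIMARY KEY,
    type        varchar NOT NULL,
    data        jsonb NOT NULL,              -- actual payload, usually > 2 KB, so it is TOASTed
    "createdAt" timestamp with time zone,
    "updatedAt" timestamp with time zone
);

-- ~20 BTREE expression indexes on frequently used JSONB fields, for example:
CREATE INDEX documents_customer_idx ON documents ((data ->> 'customerId'));
CREATE INDEX documents_status_idx ON documents ((data ->> 'status'));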
So far we have tried VACUUM ANALYZE and also completely rebuilding the table: creating a new table, copying all data from the old table, and recreating all indexes. Neither changed the behavior.
We also tried increasing the storage size, which in turn raises the available IOPS. It helped a bit, but it is still not the same as before.
What could be the root cause of this problem? How can we fix it permanently (without increasing storage or the instance type)? For now we are looking for easy changes; we will improve our data model in the future.
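For completeness, a sketch of how the heap / TOAST / index read split can be checked with the standard statistics counters (the table name here is a placeholder):

SELECT relname,
       heap_blks_read, heap_blks_hit,
       toast_blks_read, toast_blks_hit,
       idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
WHERE relname = 'documents';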

T3 instances are not suitable for production. Try moving to another family like a C or M type. You may have hit a burst limit (CPU credits on the instance, or the EBS burst balance on the volume) that is now causing odd behaviour.

Related

Debezium postgres incremental snapshot performance issues

I am trying to use Debezium incremental snapshots with the latest Debezium (1.7) and Postgres (v13). For testing, I populated a table with 1M rows; each row is 4 KB with a UUID primary key and 20 varchar columns. Since I just wanted to measure snapshot performance, the table data does not change for the duration of the test.
It seems that incremental snapshots are an order of magnitude slower than regular snapshots. For example, in my testing I observed speeds of about 10,000 change events per second with the vanilla snapshot, whereas I observed about 500 change events per second with incremental snapshots.
I tried increasing incremental.snapshot.chunk.size to 10,000, but I didn't see much effect on performance.
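For context, I kick off the incremental snapshot by inserting a signal row into the configured signalling table, roughly like this (the signal table name and the collection name below are placeholders for my setup):

INSERT INTO debezium_signal (id, type, data)
VALUES ('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["public.test_table"]}');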
I just wanted to confirm whether this is a known/expected issue or am I doing something wrong?
Thanks

Redshift vacuum is not reclaiming space

I have a Redshift cluster that consists of 2 nodes with 160 GB disks.
I'm randomly getting a "Disk full" error when running vacuum or any other query. My disk usage is at 92%. I deleted more than half of the old rows in a table that is 10515 MB in size, but even after rebooting the cluster there's no effect and the table is still the same size, though a count shows the new number of rows. I should have seen at least a small decrease in disk usage, but there's nothing.
Does anyone have any clue what it might be? Is dropping the table the only option in this case?
There are a few possibilities here, but first let me check the facts. You have a 2-node dc2.large cluster and it is 92% disk full. This is too full and needs to be lowered to provide temp space for query execution. You have a table that is 10515 blocks in size. To address the disk space concern you deleted half of the rows in the table in question and then vacuumed the table. Once complete, you didn't see any change to the cluster space nor the size of the table, not one block of difference in table size. Do I have this correct?
The first possibility is that the vacuum did not complete correctly. You mention that you are getting disk-full messages even when vacuuming, so could it be that the vacuum you tried is not completing? You see, vacuum needs temp space to sort the table data, and if the cluster has gotten too full the vacuum can fail. In this case you can run a delete-only vacuum that will not attempt to sort the table, just reclaim disk space. This has a higher likelihood of success in a disk-full situation.
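That would look something like this (the table name is a placeholder):

VACUUM DELETE ONLY my_schema.my_table;   -- reclaims space from deleted rows without re-sorting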
Another possibility is that the delete of rows didn't complete correctly or wasn't committed before the vacuum was run. This will cause the vacuum to run on the full set of rows.
It is also possible that the table in question is very wide (many columns). This matters because of how Redshift stores data: each block is 1 MB in size and each column needs at least one block per slice for its data. This cluster has 4 slices, so if this table is 1,500 columns wide (yes, that is silly wide) the table will take up 6,000 blocks just to store the first 4 rows. It then takes no additional disk space to add rows until these blocks start to fill up, so the table size moves in very large chunks, and when removing rows the size may not change except in large chunks. This is unlikely to be what is happening if you are seeing EXACTLY the same number of blocks, but if you are just seeing smaller changes in blocks than you expect, this could be in play.
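If it helps confirm which case you are in, the table's size in 1 MB blocks and its row count can be checked like this (table name is a placeholder):

SELECT "table", size AS size_in_1mb_blocks, tbl_rows, pct_used
FROM svv_table_info
WHERE "table" = 'my_table';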
There could be some other misunderstanding happening: a sort-only vacuum won't free up space, the node type isn't what I think it is, or the table could live in S3 and be accessed through Spectrum. But based on the description these don't seem likely.
UNSOLICITED ADVICE: You are on the right track by freeing up disk space, but you need to take more action than reducing this one table. (I expect you realize this and this is just a start.) You should be operating below 70% disk full in most cases; this varies by workload and table sizes but is a good general rule. This means removing a great deal of data from your disks or increasing your node count (and cost). Migrating some data to S3 and using Spectrum to access it could be an option. If you need more storage without more compute you can look at the storage-optimized nodes, but since you are at the very smallest end of Redshift these likely aren't a win for you. You need to 1) remove unneeded data, 2) move some data to S3 and use Spectrum, or 3) add a node to your cluster.
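To keep an eye on that threshold, approximate per-node disk usage can be pulled from STV_PARTITIONS (a sketch; the numbers may differ slightly from what the console reports):

SELECT owner AS node,
       SUM(used) AS used_1mb_blocks,
       SUM(capacity) AS capacity_1mb_blocks,
       ROUND(SUM(used)::numeric / SUM(capacity) * 100, 1) AS pct_used
FROM stv_partitions
GROUP BY owner
ORDER BY owner;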

Postgres Partitioning Query Performance when Partitioned for Delete

We are on PostgreSQL 12 and looking to partition a group of tables that are all related by data source name. A source can have tens of millions of records, and the whole dataset makes up about 900 GB of space across the 2,000 data sources. We don't have a good way to update these records, so we are looking at a full dump and reload any time we need to update data for a source. This is why we are looking at partitioning: we can load the new data into a new partition, detach (and later drop) the partition that currently houses the data, and then attach the new partition with the latest data. Queries will be performed via a single ID field. My concern is that, since we are partitioning by source name and querying by an ID that isn't part of the partition key, we won't be able to use any partition pruning and our queries will suffer for it.
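The reload-and-swap described above would look roughly like this (assuming LIST partitioning on the source name; table and partition names are made up for illustration):

-- build the replacement partition as a plain table and bulk load it
CREATE TABLE measurements_source_a_new (LIKE measurements);
-- ... COPY / INSERT the refreshed data for 'source_a' here ...

-- swap the partitions
ALTER TABLE measurements DETACH PARTITION measurements_source_a;
ALTER TABLE measurements ATTACH PARTITION measurements_source_a_new
    FOR VALUES IN ('source_a');

-- drop the old data when it is no longer needed
DROP TABLE measurements_source_a;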
How concerned should we be with query performance for this use case? There will be an index defined on the ID that is being queried, but based on the Postgres documentation it can add a lot of planning time and use a lot of memory to service queries that look at many partitions.
Performance will suffer, but how much depends on the number of partitions. The more partitions you have, the slower both planning and execution get, so keep the number low.
You can save on query planning time by defining a prepared statement and reusing it.
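That would look something like this (table and column names are placeholders):

PREPARE get_record (bigint) AS
    SELECT * FROM measurements WHERE record_id = $1;

EXECUTE get_record(12345);

Note that the planner only switches to a cached generic plan after a few executions; on PostgreSQL 12 you can also set plan_cache_mode = force_generic_plan to get that behaviour immediately.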

Slow bulk read from Postgres Read replica while updating the rows we read

We have on RDS a main Postgres server and a read replica.
We constantly write and update new data for the last couple of days.
Reading from the read replica works fine when looking at older data, but reading data from the last couple of days, which we keep updating on the main server, is painfully slow.
Queries that take 2-3 minutes on old data can timeout after 20 minutes when querying data from the last day or two.
Looking at monitors like CPU, I don't see any extra load on the read replica.
Is there a solution for this?
You are accessing over 65 buffers for every 1 visible row found in the index scan (and over 500 buffers for each row which is returned by the index scan, since 90% are filtered out by the mmsi criterion).
One issue is that your index is not as selective as it could be. If you had the index on (day, mmsi) rather than just (day), it should be about 10 times faster.
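Something along these lines (the index name is arbitrary):

CREATE INDEX CONCURRENTLY simplified_blips_day_mmsi_idx
    ON simplified_blips (day, mmsi);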
But it also looks like you have a massive amount of bloat.
You are probably not vacuuming the table often enough. With your described UPDATE pattern, all the vacuuming needs accumulate in the newest data, but the activity counters are evaluated against the full table size, so autovacuum does not run often enough to suit the needs of the new data. You could lower the scale factor for this table:
alter table simplified_blips set (autovacuum_vacuum_scale_factor = 0.01)
Or, if you partition the data based on "day", the partitions for newer days will naturally get vacuumed more often, because the occurrence of updates will be judged against the size of each partition rather than being diluted by the size of all the older, inactive partitions. Also, each vacuum run will take less work, as it won't have to scan all of the indexes of the entire table, just the indexes of the active partitions.
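A minimal sketch of that layout, with made-up names and only the relevant columns:

CREATE TABLE simplified_blips_part (
    day  date NOT NULL,
    mmsi integer NOT NULL
    -- ... remaining columns ...
) PARTITION BY RANGE (day);

CREATE TABLE simplified_blips_d20220101 PARTITION OF simplified_blips_part
    FOR VALUES FROM ('2022-01-01') TO ('2022-01-02');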
As suggested, the problem was bloat.
When you update a record, PostgreSQL does not modify the row in place; it creates a new version of the record containing the updated data.
After the update you end up with a "dead record" (AKA a dead tuple).
Every once in a while the database runs autovacuum and cleans the dead tuples out of the table.
Usually the default autovacuum settings are fine, but if your table is really large and updated often you should consider making the autovacuum thresholds and scale factors more aggressive.
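A quick way to see whether dead tuples are piling up and when autovacuum last ran (this uses the standard statistics view and works on RDS as well):

SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;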

MongoDB: Why would secondary members increase memory usage before the primary?

I have a MongoDB v2.4 replica set on AWS and have been monitoring my stats using MMS and dbStats(). Yesterday I saw an increase in both mapped and virtual memory usage, which correlated with an increased data fileSize and looked completely normal...except that the increase occurred on the secondaries a full two hours before it occurred on the primary (all of these members being located in the same data center).
I vaguely recall that not all members of a replica set will necessarily have the same organization of data in their data files, and I know that you can use the compact() command to defragment the files.
The only difference between the primary and the secondaries in this replica set is that, at one time, the primary was taken offline for roughly 20 minutes. It was then brought back online and re-elected as the primary.
My question is: Is there any reason to be alarmed that the primary seemed to lag behind the secondaries when increasing its mapped & virtual memory usage?