I need to maintain an audit table, and since the number of changes is going to be huge, I need an efficient way of dealing with the problem. The solution I have in mind is to record only the changed column in the audit table and partition it on the createdon column quarterly or half-yearly.
I wanted to know if there is anything like Oracle's 'interval partitioning'? If not, how can I achieve it?
I want a new partition to be created automatically every 6 months as rows are inserted.
I am using postgres 11 as my db.
I do not think there is any magic configuration that makes your life easier on this point:
https://www.postgresql.org/docs/11/ddl-partitioning.html
If you want the partitions auto-created, I think you have two major possibilities:
Check each row at insert time into the 'mother' table to see if it fits into an already existing partition (via a trigger; with a huge volume of inserts this could be a problem).
Check once in a while that you already have the partitions that are going to be needed in the future. For this one pg_partman is going to be your best ally.
As an example, a few years ago I built a partitioning mechanism when there was only the declarative option and no possibility of adding pg_partman. With the trigger mechanism it still works like a charm for 15 million rows per month.
If you do not want to harm your performance EVER (and especially if you do not know how large your system is going to grow), I recommend the same answer as a_horse_with_no_name's comment: use pg_partman.
If you cannot use it, as was the case for me, adopt one of the two approaches (a trigger, or creating the partitions in advance via a cron task, for example).
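As a rough sketch of the "create partitions in advance" approach with PostgreSQL 11 declarative partitioning (only createdon and the half-year interval come from the question; the other column names are hypothetical):

    -- Parent table partitioned by range on the audit timestamp
    CREATE TABLE audit_log (
        table_name  text        NOT NULL,
        column_name text        NOT NULL,
        old_value   text,
        new_value   text,
        createdon   timestamptz NOT NULL
    ) PARTITION BY RANGE (createdon);

    -- Half-year partitions created ahead of time (e.g. from a cron task)
    CREATE TABLE audit_log_2019_h1 PARTITION OF audit_log
        FOR VALUES FROM ('2019-01-01') TO ('2019-07-01');
    CREATE TABLE audit_log_2019_h2 PARTITION OF audit_log
        FOR VALUES FROM ('2019-07-01') TO ('2020-01-01');

pg_partman automates exactly this kind of pre-creation (and the cleanup of old partitions), so you do not have to schedule the DDL yourself.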
I've checked the documentation, watched some presentations and read blogs, but I can't find examples of partitioning more than a single table in PostgreSQL - and that's what we need. Our tables are an insert-only audit trail with a master-detail structure, and we aim to solve our slow data removal problem, currently handled using DELETE.
The simplified structure and some queries are shown in the following fiddle: https://www.db-fiddle.com/f/2mRXT4wGjM2ZSftjgKyZce/46
The issue I'm investigating right now is how to effectively query the detail table, be it in a JOIN or directly. Because the timestamp field is part of the partition key, I understand that using it in the query is essential. I don't understand why a JOIN is not able to figure this out when timestamp equality is used in the ON clause (a couple of EXPLAIN examples are in the fiddle).
Then there are broader questions:
What is the generally recommended strategy for a case like this? We expect the timestamp to be essential for our queries, so it feels natural to use it as the partitioning key.
I've made a short experiment (so no real experience from it yet) and based the partitioning solely on an id range. This seems to have one advantage - predictable partition table sizes (more or less, depending on the size of variable columns, of course). It is possible to add CHECK (timestamp ...) conditions on any full partition (and an open-interval check on the active one too!), which helps with partition pruning. This has the nice benefit that the detail table needs a single-column FK referencing only master.id (and perhaps even prunes better during JOINs). Any ideas or experiences with something similar?
We would rather have time-based partitioning, as it seems more natural, but it's not a hard requirement. The need to drag the timestamp into another table and into its FK, etc., makes it less compelling.
Obviously, we want both tables (all of them, to be precise, as we will have more detail table types) partitioned along the same range, be it id or timestamp. I guess not doing so defeats the whole purpose of partitioning, as we would not be able to remove data related to the master partitions.
I welcome any pointers or ideas on how to do it properly. In the end we will decide for ourselves, but there is not much material to help with the decision right now. Thanks.
Your strategy is good. Partition related tables by the common timestamp and make sure that the partition boundaries are the same.
You probably didn't get the efficient partitionwise join because you didn't set enable_partitionwise_join to on. That parameter is turned off by default because it can consume substantial query planning time that you don't want to expend unless you know you can benefit.
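For illustration, a minimal sketch of matching boundaries plus the planner setting (the master/detail names mirror the fiddle's structure, but the exact DDL here is an assumption):

    -- Both tables partitioned by the same timestamp with identical boundaries
    CREATE TABLE master (
        id bigint      NOT NULL,
        ts timestamptz NOT NULL
    ) PARTITION BY RANGE (ts);
    CREATE TABLE master_2019 PARTITION OF master
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

    CREATE TABLE detail (
        id        bigint      NOT NULL,
        master_id bigint      NOT NULL,
        ts        timestamptz NOT NULL
    ) PARTITION BY RANGE (ts);
    CREATE TABLE detail_2019 PARTITION OF detail
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

    -- Ask the planner to join matching partition pairs directly
    SET enable_partitionwise_join = on;
    EXPLAIN
    SELECT *
    FROM master m
    JOIN detail d ON d.master_id = m.id AND d.ts = m.ts
    WHERE m.ts >= '2019-01-01' AND m.ts < '2020-01-01';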
I'm currently looking for somebody to advise me on the process I'm going to take forward for DB archiving.
I have a database (DB-1) which has 2 very large tables, one holding 25 GB of data and the other 20 GB, and they cause major performance issues even though I have indexes.
So we are considering archiving the old data with the process below:
Clone a new database (DB-2) from existing database (DB-1).
Delete the old data from DB-1, so it will hold only the last 2 years of records. If I need the old data, I can connect to DB-2.
Every month, move old data from DB-1 to DB-2 and delete the moved rows from DB-1.
That is the wrong approach.
What you are looking for is partitioning.
You can create range partitions covering one year each. To remove old data all you need to do is to drop the partition for the year(s) no longer needed.
If you need to keep the data for some reason, you can also just detach the partition from the table. The data is then still "lying around", but does not show up in the (partitioned) table. You can query the (detached) partition directly to access that data. You could even move that (detached) partition to a slower hard disk to free up space on your fast disks, if you have more than one.
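A minimal sketch of what dropping or detaching yearly partitions could look like (table, column, and tablespace names are hypothetical):

    CREATE TABLE measurements (
        id         bigint      NOT NULL,
        payload    text,
        created_at timestamptz NOT NULL
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE measurements_2018 PARTITION OF measurements
        FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
    CREATE TABLE measurements_2019 PARTITION OF measurements
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

    -- Remove a year that is no longer needed ...
    DROP TABLE measurements_2018;

    -- ... or keep the data around, just outside the partitioned table
    ALTER TABLE measurements DETACH PARTITION measurements_2019;
    -- optionally move the detached table to cheaper storage
    ALTER TABLE measurements_2019 SET TABLESPACE slow_storage;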
You might even find that partitioning alone already improves performance, but that depends a lot on your queries.
Note that you should use Postgres 11 for that, as partitioning wasn't that sophisticated in older versions.
While you should no doubt upgrade your current version (I'd suggest moving away from the EDB system you are working on now and going to community-based Postgres 11), even if you can't upgrade, partitioning is still a much better answer than creating a second database.
By recreating your table as a set of partitions within the same database, you will be able to add/remove data in a much cleaner fashion, and it will make dealing with Vacuums much easier. Even in 9.5, you can take advantage of table inheritance to build out partitions by first adding partitions for incoming data, and then creating partitions at various intervals (probably monthly, since you want to run monthly cleanup) and moving the data into those partitions. This can be accomplished atomically with a series of INSERT INTO partition SELECT * FROM table WHERE <timestamp> style statements.
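A rough sketch of that inheritance-based approach on 9.5 (all table and column names here are hypothetical):

    -- Monthly child table with a CHECK constraint so the planner can exclude it
    CREATE TABLE events_2019_01 (
        CHECK (created_at >= '2019-01-01' AND created_at < '2019-02-01')
    ) INHERITS (events);

    -- Move one month of existing rows into the new partition atomically
    BEGIN;
    INSERT INTO events_2019_01
        SELECT * FROM ONLY events
        WHERE created_at >= '2019-01-01' AND created_at < '2019-02-01';
    DELETE FROM ONLY events
        WHERE created_at >= '2019-01-01' AND created_at < '2019-02-01';
    COMMIT;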
I suspect you can probably manage this yourself (you need basic sql and the ability to write simple triggers/functions... here is a link to the 9.5 docs), but if you need help, you can engage with one of the Postgres chat communities, or contact a support company if you want a deeper dive.
I have some pretty old tables which hold records of clients' payments and commissions for several years. In the regular business cycle it is sometimes necessary to recalculate commissions and update the table, but the recalculation period is usually only 1 or 2 months back, not more.
Recently, as a result of a bug in a PHP script, our developer recalculated commissions since the very beginning 0_0. The recalculation process is really complicated, so it can't be undone just by grabbing yesterday's backup - the data changes in numerous databases, so restoring it is a really complicated and awfully expensive procedure. Plus complaints from clients and changes in accounting... you know... horror.
We can't split the tables by periods. (Well, we can, but it would take a year to rework all the data selects.)
What I'm trying to think about is setting up an update trigger that would check the date of the record being changed against an allowed date, which should be less than the update date. So in case of a mistake or bug, anyone trying to update such a 'restricted' row would get an exception and the data would stay unchanged.
Is that a good approach? And how can it be done - I mean the trigger?
For Postgres you can use a CHECK constraint to ensure allowed_date is always less than update_date:
ALTER TABLE mytable ADD CONSTRAINT datecheck CHECK (allowed_date < update_date);
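If you want the trigger approach described in the question instead, a minimal sketch could look like this (the table name, column name, and 2-month cutoff are assumptions, not part of the question):

    CREATE OR REPLACE FUNCTION forbid_old_row_updates() RETURNS trigger AS $$
    BEGIN
        -- Allow changes only to rows from the last 2 months (hypothetical cutoff)
        IF OLD.payment_date < now() - interval '2 months' THEN
            RAISE EXCEPTION 'Row dated % is closed for updates', OLD.payment_date;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER protect_old_rows
        BEFORE UPDATE ON payments
        FOR EACH ROW EXECUTE PROCEDURE forbid_old_row_updates();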
My software runs a cronjob every 30 minutes, which pulls data from Google Analytics / Social networks and inserts the results into a Postgres DB.
The data looks like this:
url text NOT NULL,
rangeStart timestamp NOT NULL,
rangeEnd timestamp NOT NULL,
createdAt timestamp DEFAULT now() NOT NULL,
...
(various integer columns)
Since one query returns 10 000+ items, it's obviously not a good idea to store this data in a single table. At this rate, the cronjob will generate about 480 000 records a day and about 14.5 million a month.
I think the solution would be using several tables, for example I could use a specific table to store data generated in a given month: stats_2015_09, stats_2015_10, stats_2015_11 etc.
I know Postgres supports table partitioning. However, I'm new to this concept, so I'm not sure what's the best way to do this. Do I need partitioning in this case, or should I just create these tables manually? Or maybe there is a better solution?
The data will be queried later in various ways, and those queries are expected to run fast.
EDIT:
If I end up with 12-14 tables, each storing 10-20 millions rows, Postgres should be still able to run select statements quickly, right? Inserts don't have to be super fast.
Partitioning is a good idea under various circumstances. Two that come to mind are:
Your queries have a WHERE clause that can be readily mapped onto one or a handful of partitions.
You want a speedy way to delete historical data (dropping a partition is faster than deleting records).
Without knowledge of the types of queries that you want to run, it is difficult to say if partitioning is a good idea.
I think I can say that splitting the data into different tables is a bad idea because it is a maintenance nightmare:
You can't have foreign key references into the table.
Queries spanning multiple tables are cumbersome, so simple questions are hard to answer.
Maintaining tables becomes a nightmare (adding/removing a column).
Permissions have to be carefully maintained, if you have users with different roles.
In any case, the place to start is with Postgres's documentation on partitioning, which is here. I should note that Postgres's implementation is a bit more awkward than in other databases, so you might want to review the documentation for MySQL or SQL Server to get an idea of what it is doing.
Firstly, I would like to challenge the premise of your question:
Since one query returns 10 000+ items, it's obviously not a good idea to store this data in a single table.
As far as I know, there is no fundamental reason why the database would not cope fine with a single table of many millions of rows. At the extreme, if you created a table with no indexes, and simply appended rows to it, Postgres could simply carry on writing these rows to disk until you ran out of storage space. (There may be other limits internally, I'm not sure; but if so, they're big.)
The problems only come when you try to do something with that data, and the exact problems - and therefore exact solutions - depend on what you do.
If you want to regularly delete all rows which were inserted more than a fixed timescale ago, you could partition the data on the createdAt column. The DELETE would then become a very efficient DROP TABLE, and all INSERTs would be routed through a trigger to the "current" partition (or could even bypass it if your import script was aware of the partition naming scheme). SELECTs, however, would probably not be able to specify a range of createdAt values in their WHERE clause, and would thus need to query all partitions and combine the results. The more partitions you keep around at a time, the less efficient this would be.
Alternatively, you might examine the workload on the table and see that all queries either already do, or easily can, explicitly state a rangeStart value. In that case, you could partition on rangeStart, and the query planner would be able to eliminate all but one or a few partitions when planning each SELECT query. INSERTs would need to be routed through a trigger to the appropriate table, and maintenance operations (such as deleting old data that is no longer needed) would be much less efficient.
Or perhaps you know that once rangeEnd becomes "too old" you will no longer need the data, and can get both benefits: partition by rangeEnd, ensure all your SELECT queries explicitly mention rangeEnd, and drop partitions containing data you are no longer interested in.
To borrow Linus Torvalds' terminology from git, the "plumbing" for partitioning is built into Postgres in the form of table inheritance, as documented here, but there is little in the way of "porcelain" other than examples in the manual. However, there is a very good extension called pg_partman which provides functions for managing partition sets based on either IDs or date ranges; it's well worth reading through the documentation to understand the different modes of operation. In my case, none quite matched, but forking that extension was significantly easier than writing everything from scratch.
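For example, a hand-rolled inheritance setup partitioned on rangeEnd might look roughly like this (a sketch only; the parent table name stats, the partition bounds, and the routing logic are assumptions):

    -- Child partition holding one month of rangeEnd values
    CREATE TABLE stats_2015_09 (
        CHECK (rangeEnd >= '2015-09-01' AND rangeEnd < '2015-10-01')
    ) INHERITS (stats);

    -- Route inserts on the parent to the right child
    CREATE OR REPLACE FUNCTION stats_insert_router() RETURNS trigger AS $$
    BEGIN
        IF NEW.rangeEnd >= '2015-09-01' AND NEW.rangeEnd < '2015-10-01' THEN
            INSERT INTO stats_2015_09 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'No partition for rangeEnd %', NEW.rangeEnd;
        END IF;
        RETURN NULL;  -- the row has already been stored in the child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER stats_partition_insert
        BEFORE INSERT ON stats FOR EACH ROW
        EXECUTE PROCEDURE stats_insert_router();

    -- Deleting a month of old data then becomes
    DROP TABLE stats_2015_09;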
Remember that partitioning does not come free, and if there is no obvious candidate for a column to partition by based on the kind of considerations above, you may actually be better off leaving the data in one table, and considering other optimisation strategies. For instance, partial indexes (CREATE INDEX ... WHERE) might be able to handle the most commonly queried subset of rows; perhaps combined with "covering indexes", where Postgres can return the query results directly from the index without reference to the main table structure ("index-only scans").
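For instance, a partial index on only the most recent rows might look like this (the column names come from the question's schema, but the index itself and the cutoff date are assumptions):

    -- Serves queries that filter on recent rangeStart values,
    -- while staying much smaller than a full-table index
    CREATE INDEX stats_recent_idx
        ON stats (url, rangeStart)
        WHERE rangeStart >= '2015-10-01';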
I have 3 database tables, each contain 6 million rows and adding 3 million rows every year.
Following are the table information:
Table 1: 20 fields, averaging 50 characters in each field. It has 2 indexes, both on timestamp fields.
Table 2: 5 fields, 2 byte array field and 1 xml field
Table 3: 4 fields, 1 byte array field
Following is the usage:
Insert 15 to 20 records per second in each table.
A view is created by joining first 2 tables and the select is mostly based on the date field in the first table.
Right now, inserting one record into each of the three tables together takes about 100 milliseconds.
I'm planning to migrate from Postgres 8.4 to 9.2. I would also like to do some optimization for insert performance. In addition, I'm planning to create history tables and keep the old records in those tables. I have the following questions in this regard:
Will creating history tables and moving older data into them help increase insert performance?
If it helps, how often do I need to move the old records into the history tables: daily, weekly, monthly, or yearly?
If I keep only one month of data (220,000 rows) instead of one year (3 million), will it help improve insert performance?
Thanks in advance,
Sudheer
I'm sure someone better informed than I will show up and provide a better answer, but my impression is that:
Insert performance is mostly a function of your indexing strategy and your hardware
Performance, in general, is better under 9.0+ than 8.4, and this may rub off on insert performance, but I'm not certain of that.
None of your ideas are going to directly affect insert performance
Now, that said, the cost of maintaining a small index is lower than a large one, so it may be that creating history tables and moving old data there will improve performance simply by reducing index pressure. But I would expect dropping one of your indexes to have a direct and greater effect. Perhaps you could have a history table with both indexes and just maintain one of them on the "today" table?
If I were in your shoes, I'd get a copy of production going on my machine running 8.4 with a similar configuration. Then upgrade to 9.2 and see if the insert performance changes. Then try out these ideas and benchmark them, see which ones improve the situation. It's absolutely essential that things be kept as similar to production as possible for this to yield useful information, but it will certainly be better information than any hypothetical answer you might get.
Now, 100ms seems pretty slow for inserting one row IMO. Better hardware would certainly improve this situation. The usual suggestion would be a big striped RAID array with a battery-backed cache. PostgreSQL 9.0 High Performance has more information on all of this.